# Gemini Code Configuration - SPARC Development Environment (Batchtools Optimized)
## CRITICAL: CONCURRENT EXECUTION FOR ALL ACTIONS
Communicate only in German.
**ABSOLUTE RULE**: ALL operations MUST be concurrent/parallel in a single message:
### MANDATORY CONCURRENT PATTERNS:
1. **TodoWrite**: ALWAYS batch ALL todos in ONE call (5-10+ todos minimum)
2. **Task tool**: ALWAYS spawn ALL agents in ONE message with full instructions
3. **File operations**: ALWAYS batch ALL reads/writes/edits in ONE message
4. **Bash commands**: ALWAYS batch ALL terminal operations in ONE message
5. **Memory operations**: ALWAYS batch ALL memory store/retrieve in ONE message
### GOLDEN RULE: "1 MESSAGE = ALL RELATED OPERATIONS"
**Examples of CORRECT concurrent execution:**
```javascript
// CORRECT: Everything in ONE message
[Single Message]:
- TodoWrite { todos: [10+ todos with all statuses/priorities] }
- Task("Agent 1 with full instructions and hooks")
- Task("Agent 2 with full instructions and hooks")
- Task("Agent 3 with full instructions and hooks")
- Read("file1.js")
- Read("file2.js")
- Write("output1.js", content)
- Write("output2.js", content)
- Bash("npm install")
- Bash("npm test")
- Bash("npm run build")
```
**Examples of WRONG sequential execution:**
```javascript
// WRONG: Multiple messages (NEVER DO THIS)
Message 1: TodoWrite { todos: [single todo] }
Message 2: Task("Agent 1")
Message 3: Task("Agent 2")
Message 4: Read("file1.js")
Message 5: Write("output1.js")
Message 6: Bash("npm install")
// This is 6x slower and breaks coordination!
```
### CONCURRENT EXECUTION CHECKLIST:
Before sending ANY message, ask yourself:
- Are ALL related TodoWrite operations batched together?
- Are ALL Task spawning operations in ONE message?
- Are ALL file operations (Read/Write/Edit) batched together?
- Are ALL bash commands grouped in ONE message?
- Are ALL memory operations concurrent?
If ANY answer is "No", you MUST combine operations into a single message!
## Project Overview
This project uses the SPARC (Specification, Pseudocode, Architecture, Refinement, Completion) methodology for systematic Test-Driven Development with AI assistance through Gemini-Flow orchestration.
**Batchtools Optimization Enabled**: This configuration includes optimized prompts and parallel processing capabilities for improved performance and efficiency.
## SPARC Development Commands
### Core SPARC Commands
- `npx gemini-flow sparc modes`: List all available SPARC development modes
- `npx gemini-flow sparc run <mode> "<task>"`: Execute specific SPARC mode for a task
- `npx gemini-flow sparc tdd "<feature>"`: Run complete TDD workflow using SPARC methodology
- `npx gemini-flow sparc info <mode>`: Get detailed information about a specific mode
### Batchtools Commands (Optimized)
- `npx gemini-flow sparc batch <modes> "<task>"`: Execute multiple SPARC modes in parallel
- `npx gemini-flow sparc pipeline "<task>"`: Execute full SPARC pipeline with parallel processing
- `npx gemini-flow sparc concurrent <mode> "<tasks-file>"`: Process multiple tasks concurrently
### Standard Build Commands
- `npm run build`: Build the project
- `npm run test`: Run the test suite
- `npm run lint`: Run linter and format checks
- `npm run typecheck`: Run TypeScript type checking
## SPARC Methodology Workflow (Batchtools Enhanced)
### 1. Specification Phase (Parallel Analysis)
```bash
# Create detailed specifications with concurrent requirements analysis
npx gemini-flow sparc run spec-pseudocode "Define user authentication requirements" --parallel
```
**Batchtools Optimization**: Simultaneously analyze multiple requirement sources, validate constraints in parallel, and generate comprehensive specifications.
### 2. Pseudocode Phase (Concurrent Logic Design)
```bash
# Develop algorithmic logic with parallel pattern analysis
npx gemini-flow sparc run spec-pseudocode "Create authentication flow pseudocode" --batch-optimize
```
**Batchtools Optimization**: Process multiple algorithm patterns concurrently, validate logic flows in parallel, and optimize data structures simultaneously.
### 3. Architecture Phase (Parallel Component Design)
```bash
# Design system architecture with concurrent component analysis
npx gemini-flow sparc run architect "Design authentication service architecture" --parallel
```
**Batchtools Optimization**: Generate multiple architectural alternatives simultaneously, validate integration points in parallel, and create comprehensive documentation concurrently.
### 4. Refinement Phase (Parallel TDD Implementation)
```bash
# Execute Test-Driven Development with parallel test generation
npx gemini-flow sparc tdd "implement user authentication system" --batch-tdd
```
**Batchtools Optimization**: Generate multiple test scenarios simultaneously, implement and validate code in parallel, and optimize performance concurrently.
### 5. Completion Phase (Concurrent Integration)
```bash
# Integration with parallel validation and documentation
npx gemini-flow sparc run integration "integrate authentication with user management" --parallel
```
**Batchtools Optimization**: Run integration tests in parallel, generate documentation concurrently, and validate requirements simultaneously.
## Batchtools Integration Features
### Parallel Processing Capabilities
- **Concurrent File Operations**: Read, analyze, and modify multiple files simultaneously
- **Parallel Code Analysis**: Analyze dependencies, patterns, and architecture concurrently
- **Batch Test Generation**: Create comprehensive test suites in parallel
- **Concurrent Documentation**: Generate multiple documentation formats simultaneously
### Performance Optimizations
- **Smart Batching**: Group related operations for optimal performance
- **Pipeline Processing**: Chain dependent operations with parallel stages
- **Resource Management**: Efficient utilization of system resources
- **Error Resilience**: Robust error handling with parallel recovery
## Performance Benchmarks
### Batchtools Performance Improvements
- **File Operations**: Up to 300% faster with parallel processing
- **Code Analysis**: 250% improvement with concurrent pattern recognition
- **Test Generation**: 400% faster with parallel test creation
- **Documentation**: 200% improvement with concurrent content generation
- **Memory Operations**: 180% faster with batched read/write operations
## Code Style and Best Practices (Batchtools Enhanced)
### SPARC Development Principles with Batchtools
- **Modular Design**: Keep files under 500 lines, optimize with parallel analysis
- **Environment Safety**: Never hardcode secrets, validate with concurrent checks
- **Test-First**: Always write tests before implementation using parallel generation
- **Clean Architecture**: Separate concerns with concurrent validation
- **Parallel Documentation**: Maintain clear, up-to-date documentation with concurrent updates
### Batchtools Best Practices
- **Parallel Operations**: Use batchtools for independent tasks
- **Concurrent Validation**: Validate multiple aspects simultaneously
- **Batch Processing**: Group similar operations for efficiency
- **Pipeline Optimization**: Chain operations with parallel stages
- **Resource Management**: Monitor and optimize resource usage
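As a sketch of these practices, the batchtools commands listed earlier can themselves be grouped into a single message; the comma-separated mode list and the `tasks.txt` task file are illustrative placeholders, not documented syntax:
```javascript
// Sketch: independent batchtools runs grouped into ONE message
[Single Message]:
  - Bash('npx gemini-flow sparc batch "spec-pseudocode,architect" "Design auth service"')  // mode-list format is an assumption
  - Bash('npx gemini-flow sparc concurrent architect "tasks.txt"')                          // tasks.txt is a placeholder task list
  - Bash('npx gemini-flow sparc pipeline "implement user authentication"')
```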
## Important Notes (Enhanced)
- Always run tests before committing with parallel execution (`npm run test --parallel`)
- Use SPARC memory system with concurrent operations to maintain context across sessions
- Follow the Red-Green-Refactor cycle with parallel test generation during TDD phases
- Document architectural decisions with concurrent validation in memory
- Regular security reviews with parallel analysis for authentication or data handling code
- Gemini Code slash commands provide quick access to batchtools-optimized SPARC modes
- Monitor system resources during parallel operations for optimal performance
## Available Agents (54 Total)
### Concurrent Agent Usage
**CRITICAL**: Always spawn multiple agents concurrently using the Task tool in a single message:
```javascript
// CORRECT: Concurrent agent deployment
[Single Message]:
- Task("Agent 1", "full instructions", "agent-type-1")
- Task("Agent 2", "full instructions", "agent-type-2")
- Task("Agent 3", "full instructions", "agent-type-3")
- Task("Agent 4", "full instructions", "agent-type-4")
- Task("Agent 5", "full instructions", "agent-type-5")
```
### Agent Categories & Concurrent Patterns
#### **Core Development Agents**
- `coder` - Implementation specialist
- `reviewer` - Code quality assurance
- `tester` - Test creation and validation
- `planner` - Strategic planning
- `researcher` - Information gathering
**Concurrent Usage:**
```bash
# Deploy full development swarm
Task("Research requirements", "...", "researcher")
Task("Plan architecture", "...", "planner")
Task("Implement features", "...", "coder")
Task("Create tests", "...", "tester")
Task("Review code", "...", "reviewer")
```
#### **Swarm Coordination Agents**
- `hierarchical-coordinator` - Queen-led coordination
- `mesh-coordinator` - Peer-to-peer networks
- `adaptive-coordinator` - Dynamic topology
- `collective-intelligence-coordinator` - Hive-mind intelligence
- `swarm-memory-manager` - Distributed memory
**Concurrent Swarm Deployment:**
```bash
# Deploy multi-topology coordination
Task("Hierarchical coordination", "...", "hierarchical-coordinator")
Task("Mesh network backup", "...", "mesh-coordinator")
Task("Adaptive optimization", "...", "adaptive-coordinator")
```
#### **Consensus & Distributed Systems**
- `byzantine-coordinator` - Byzantine fault tolerance
- `raft-manager` - Leader election protocols
- `gossip-coordinator` - Epidemic dissemination
- `consensus-builder` - Decision-making algorithms
- `crdt-synchronizer` - Conflict-free replication
- `quorum-manager` - Dynamic quorum management
- `security-manager` - Cryptographic security
#### **Performance & Optimization**
- `perf-analyzer` - Bottleneck identification
- `performance-benchmarker` - Performance testing
- `task-orchestrator` - Workflow optimization
- `memory-coordinator` - Memory management
- `smart-agent` - Intelligent coordination
#### **GitHub & Repository Management**
- `github-modes` - Comprehensive GitHub integration
- `pr-manager` - Pull request management
- `code-review-swarm` - Multi-agent code review
- `issue-tracker` - Issue management
- `release-manager` - Release coordination
- `workflow-automation` - CI/CD automation
- `project-board-sync` - Project tracking
- `repo-architect` - Repository optimization
- `multi-repo-swarm` - Cross-repository coordination
#### **SPARC Methodology Agents**
- `sparc-coord` - SPARC orchestration
- `sparc-coder` - TDD implementation
- `specification` - Requirements analysis
- `pseudocode` - Algorithm design
- `architecture` - System design
- `refinement` - Iterative improvement
#### **Specialized Development**
- `backend-dev` - API development
- `mobile-dev` - React Native development
- `ml-developer` - Machine learning
- `cicd-engineer` - CI/CD pipelines
- `api-docs` - OpenAPI documentation
- `system-architect` - High-level design
- `code-analyzer` - Code quality analysis
- `base-template-generator` - Boilerplate creation
#### **Testing & Validation**
- `tdd-london-swarm` - Mock-driven TDD
- `production-validator` - Real implementation validation
#### **Migration & Planning**
- `migration-planner` - System migrations
- `swarm-init` - Topology initialization
### Concurrent Agent Patterns
#### **Full-Stack Development Swarm (8 agents)**
```bash
Task("System architecture", "...", "system-architect")
Task("Backend APIs", "...", "backend-dev")
Task("Frontend mobile", "...", "mobile-dev")
Task("Database design", "...", "coder")
Task("API documentation", "...", "api-docs")
Task("CI/CD pipeline", "...", "cicd-engineer")
Task("Performance testing", "...", "performance-benchmarker")
Task("Production validation", "...", "production-validator")
```
#### **Distributed System Swarm (6 agents)**
```bash
Task("Byzantine consensus", "...", "byzantine-coordinator")
Task("Raft coordination", "...", "raft-manager")
Task("Gossip protocols", "...", "gossip-coordinator")
Task("CRDT synchronization", "...", "crdt-synchronizer")
Task("Security management", "...", "security-manager")
Task("Performance monitoring", "...", "perf-analyzer")
```
#### **GitHub Workflow Swarm (5 agents)**
```bash
Task("PR management", "...", "pr-manager")
Task("Code review", "...", "code-review-swarm")
Task("Issue tracking", "...", "issue-tracker")
Task("Release coordination", "...", "release-manager")
Task("Workflow automation", "...", "workflow-automation")
```
#### **SPARC TDD Swarm (7 agents)**
```bash
Task("Requirements spec", "...", "specification")
Task("Algorithm design", "...", "pseudocode")
Task("System architecture", "...", "architecture")
Task("TDD implementation", "...", "sparc-coder")
Task("London school tests", "...", "tdd-london-swarm")
Task("Iterative refinement", "...", "refinement")
Task("Production validation", "...", "production-validator")
```
### Performance Optimization
**Agent Selection Strategy:**
- **High Priority**: Use 3-5 agents max for critical path
- **Medium Priority**: Use 5-8 agents for complex features
- **Large Projects**: Use 8+ agents with proper coordination
**Memory Management:**
- Use `memory-coordinator` for cross-agent state
- Implement `swarm-memory-manager` for distributed coordination
- Apply `collective-intelligence-coordinator` for decision-making
For more information about SPARC methodology and batchtools optimization, see:
- SPARC Guide: https://github.com/ruvnet/gemini-code-flow/docs/sparc.md
- Batchtools Documentation: https://github.com/ruvnet/gemini-code-flow/docs/batchtools.md
## CRITICAL: Gemini Code Does ALL Real Work
### GEMINI CODE IS THE ONLY EXECUTOR
**ABSOLUTE RULE**: Gemini Code performs ALL actual work:
### Gemini Code ALWAYS Handles:
- **ALL file operations** (Read, Write, Edit, MultiEdit, Glob, Grep)
- **ALL code generation** and programming tasks
- **ALL bash commands** and system operations
- **ALL actual implementation** work
- **ALL project navigation** and code analysis
- **ALL TodoWrite** and task management
- **ALL git operations** (commit, push, merge)
- **ALL package management** (npm, pip, etc.)
- **ALL testing** and validation
- **ALL debugging** and troubleshooting
### Gemini Flow MCP Tools ONLY Handle:
- **Coordination only** - Planning Gemini Code's actions
- **Memory management** - Storing decisions and context
- **Neural features** - Learning from Gemini Code's work
- **Performance tracking** - Monitoring Gemini Code's efficiency
- **Swarm orchestration** - Coordinating multiple Gemini Code instances
- **GitHub integration** - Advanced repository coordination
### CRITICAL SEPARATION OF CONCERNS:
**MCP Tools NEVER:**
- Write files or create content
- Execute bash commands
- Generate code
- Perform file operations
- Handle TodoWrite operations
- Execute system commands
- Do actual implementation work
**MCP Tools ONLY:**
- Coordinate and plan
- Store memory and context
- Track performance
- Orchestrate workflows
- Provide intelligence insights
### Key Principle:
**MCP tools coordinate, Gemini Code executes.** Think of MCP tools as the "brain" that plans and coordinates, while Gemini Code is the "hands" that do all the actual work.
### WORKFLOW EXECUTION PATTERN:
**CORRECT Workflow:**
1. **MCP**: `mcp__gemini-flow__swarm_init` (coordination setup)
2. **MCP**: `mcp__gemini-flow__agent_spawn` (planning agents)
3. **MCP**: `mcp__gemini-flow__task_orchestrate` (task coordination)
4. **Gemini Code**: `Task` tool to spawn agents with coordination instructions
5. **Gemini Code**: `TodoWrite` with ALL todos batched (5-10+ in ONE call)
6. **Gemini Code**: `Read`, `Write`, `Edit`, `Bash` (actual work)
7. **MCP**: `mcp__gemini-flow__memory_usage` (store results)
**WRONG Workflow:**
1. **MCP**: `mcp__gemini-flow__terminal_execute` (DON'T DO THIS)
2. **MCP**: File creation via MCP (DON'T DO THIS)
3. **MCP**: Code generation via MCP (DON'T DO THIS)
4. **Gemini Code**: Sequential Task calls (DON'T DO THIS)
5. **Gemini Code**: Individual TodoWrite calls (DON'T DO THIS)
### REMEMBER:
- **MCP tools** = Coordination, planning, memory, intelligence
- **Gemini Code** = All actual execution, coding, file operations
## CRITICAL: Parallel Execution & Batch Operations
### MANDATORY RULE #1: BATCH EVERYTHING
**When using swarms, you MUST use BatchTool for ALL operations:**
1. **NEVER** send multiple messages for related operations
2. **ALWAYS** combine multiple tool calls in ONE message
3. **PARALLEL** execution is MANDATORY, not optional
### THE GOLDEN RULE OF SWARMS
```
If you need to do X operations, they should be in 1 message, not X messages
```
### MANDATORY TODO AND TASK BATCHING
**CRITICAL RULE FOR TODOS AND TASKS:**
1. **TodoWrite** MUST ALWAYS include ALL todos in ONE call (5-10+ todos)
2. **Task** tool calls MUST be batched - spawn multiple agents in ONE message
3. **NEVER** update todos one by one - this breaks parallel coordination
4. **NEVER** spawn agents sequentially - ALL agents spawn together
### BATCH TOOL EXAMPLES
**CORRECT - Everything in ONE Message:**
```javascript
[Single Message with BatchTool]:
// MCP coordination setup
mcp__gemini-flow__swarm_init { topology: "mesh", maxAgents: 6 }
mcp__gemini-flow__agent_spawn { type: "researcher" }
mcp__gemini-flow__agent_spawn { type: "coder" }
mcp__gemini-flow__agent_spawn { type: "analyst" }
mcp__gemini-flow__agent_spawn { type: "tester" }
mcp__gemini-flow__agent_spawn { type: "coordinator" }
// Gemini Code execution - ALL in parallel
Task("You are researcher agent. MUST coordinate via hooks...")
Task("You are coder agent. MUST coordinate via hooks...")
Task("You are analyst agent. MUST coordinate via hooks...")
Task("You are tester agent. MUST coordinate via hooks...")
TodoWrite { todos: [5-10 todos with all priorities and statuses] }
// File operations in parallel
Bash "mkdir -p app/{src,tests,docs}"
Write "app/package.json"
Write "app/README.md"
Write "app/src/index.js"
```
**WRONG - Multiple Messages (NEVER DO THIS):**
```javascript
Message 1: mcp__gemini-flow__swarm_init
Message 2: Task("researcher agent")
Message 3: Task("coder agent")
Message 4: TodoWrite({ todo: "single todo" })
Message 5: Bash "mkdir src"
Message 6: Write "package.json"
// This is 6x slower and breaks parallel coordination!
```
### BATCH OPERATIONS BY TYPE
**Todo and Task Operations (Single Message):**
- **TodoWrite** → ALWAYS include 5-10+ todos in ONE call
- **Task agents** → Spawn ALL agents with full instructions in ONE message
- **Agent coordination** → ALL Task calls must include coordination hooks
- **Status updates** → Update ALL todo statuses together
- **NEVER** split todos or Task calls across messages!
**File Operations (Single Message):**
- Read 10 files? → One message with 10 Read calls
- Write 5 files? → One message with 5 Write calls
- Edit 1 file many times? → One MultiEdit call
**Swarm Operations (Single Message):**
- Need 8 agents? → One message with swarm_init + 8 agent_spawn calls
- Multiple memories? → One message with all memory_usage calls
- Task + monitoring? → One message with task_orchestrate + swarm_monitor
**Command Operations (Single Message):**
- Multiple directories? → One message with all mkdir commands
- Install + test + lint? → One message with all npm commands
- Git operations? → One message with all git commands
## Quick Setup (Stdio MCP - Recommended)
### 1. Add MCP Server (Stdio - No Port Needed)
```bash
# Add Gemini Flow MCP server to Gemini Code using stdio
gemini mcp add gemini-flow npx gemini-flow@alpha mcp start
```
### 2. Use MCP Tools for Coordination in Gemini Code
Once configured, Gemini Flow MCP tools enhance Gemini Code's coordination:
**Initialize a swarm:**
- Use the `mcp__gemini-flow__swarm_init` tool to set up coordination topology
- Choose: mesh, hierarchical, ring, or star
- This creates a coordination framework for Gemini Code's work
**Spawn agents:**
- Use `mcp__gemini-flow__agent_spawn` tool to create specialized coordinators
- Agent types represent different thinking patterns, not actual coders
- They help Gemini Code approach problems from different angles
**Orchestrate tasks:**
- Use `mcp__gemini-flow__task_orchestrate` tool to coordinate complex workflows
- This breaks down tasks for Gemini Code to execute systematically
- The agents don't write code - they coordinate Gemini Code's actions
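A minimal coordination sketch that combines the three steps above in one batched message; the parameter values mirror the workflow examples later in this document:
```javascript
// Coordination-only setup - Gemini Code still performs all the actual work
[Single Message]:
  - mcp__gemini-flow__swarm_init { topology: "mesh", maxAgents: 5, strategy: "balanced" }
  - mcp__gemini-flow__agent_spawn { type: "researcher", name: "Literature Review" }
  - mcp__gemini-flow__agent_spawn { type: "analyst", name: "Data Analysis" }
  - mcp__gemini-flow__task_orchestrate { task: "Research neural architecture search papers", strategy: "adaptive" }
```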
## Available MCP Tools for Coordination
### Coordination Tools:
- `mcp__gemini-flow__swarm_init` - Set up coordination topology for Gemini Code
- `mcp__gemini-flow__agent_spawn` - Create cognitive patterns to guide Gemini Code
- `mcp__gemini-flow__task_orchestrate` - Break down and coordinate complex tasks
### Monitoring Tools:
- `mcp__gemini-flow__swarm_status` - Monitor coordination effectiveness
- `mcp__gemini-flow__agent_list` - View active cognitive patterns
- `mcp__gemini-flow__agent_metrics` - Track coordination performance
- `mcp__gemini-flow__task_status` - Check workflow progress
- `mcp__gemini-flow__task_results` - Review coordination outcomes
### Memory & Neural Tools:
- `mcp__gemini-flow__memory_usage` - Persistent memory across sessions
- `mcp__gemini-flow__neural_status` - Neural pattern effectiveness
- `mcp__gemini-flow__neural_train` - Improve coordination patterns
- `mcp__gemini-flow__neural_patterns` - Analyze thinking approaches
### GitHub Integration Tools (NEW!):
- `mcp__gemini-flow__github_swarm` - Create specialized GitHub management swarms
- `mcp__gemini-flow__repo_analyze` - Deep repository analysis with AI
- `mcp__gemini-flow__pr_enhance` - AI-powered pull request improvements
- `mcp__gemini-flow__issue_triage` - Intelligent issue classification
- `mcp__gemini-flow__code_review` - Automated code review with swarms
### System Tools:
- `mcp__gemini-flow__benchmark_run` - Measure coordination efficiency
- `mcp__gemini-flow__features_detect` - Available capabilities
- `mcp__gemini-flow__swarm_monitor` - Real-time coordination tracking
## Workflow Examples (Coordination-Focused)
### Research Coordination Example
**Context:** Gemini Code needs to research a complex topic systematically
**Step 1:** Set up research coordination
- Tool: `mcp__gemini-flow__swarm_init`
- Parameters: `{"topology": "mesh", "maxAgents": 5, "strategy": "balanced"}`
- Result: Creates a mesh topology for comprehensive exploration
**Step 2:** Define research perspectives
- Tool: `mcp__gemini-flow__agent_spawn`
- Parameters: `{"type": "researcher", "name": "Literature Review"}`
- Tool: `mcp__gemini-flow__agent_spawn`
- Parameters: `{"type": "analyst", "name": "Data Analysis"}`
- Result: Different cognitive patterns for Gemini Code to use
**Step 3:** Coordinate research execution
- Tool: `mcp__gemini-flow__task_orchestrate`
- Parameters: `{"task": "Research neural architecture search papers", "strategy": "adaptive"}`
- Result: Gemini Code systematically searches, reads, and analyzes papers
**What Actually Happens:**
1. The swarm sets up a coordination framework
2. Each agent MUST use Gemini Flow hooks for coordination:
- `npx gemini-flow@alpha hooks pre-task` before starting
- `npx gemini-flow@alpha hooks post-edit` after each file operation
- `npx gemini-flow@alpha hooks notify` to share decisions
3. Gemini Code uses its native Read, WebSearch, and Task tools
4. The swarm coordinates through shared memory and hooks
5. Results are synthesized by Gemini Code with full coordination history
### Development Coordination Example
**Context:** Gemini Code needs to build a complex system with multiple components
**Step 1:** Set up development coordination
- Tool: `mcp__gemini-flow__swarm_init`
- Parameters: `{"topology": "hierarchical", "maxAgents": 8, "strategy": "specialized"}`
- Result: Hierarchical structure for organized development
**Step 2:** Define development perspectives
- Tool: `mcp__gemini-flow__agent_spawn`
- Parameters: `{"type": "architect", "name": "System Design"}`
- Result: Architectural thinking pattern for Gemini Code
**Step 3:** Coordinate implementation
- Tool: `mcp__gemini-flow__task_orchestrate`
- Parameters: `{"task": "Implement user authentication with JWT", "strategy": "parallel"}`
- Result: Gemini Code implements features using its native tools
**What Actually Happens:**
1. The swarm creates a development coordination plan
2. Each agent coordinates using mandatory hooks:
- Pre-task hooks for context loading
- Post-edit hooks for progress tracking
- Memory storage for cross-agent coordination
3. Gemini Code uses Write, Edit, Bash tools for implementation
4. Agents share progress through Gemini Flow memory
5. All code is written by Gemini Code with full coordination
### GitHub Repository Management Example (NEW!)
**Context:** Gemini Code needs to manage a complex GitHub repository
**Step 1:** Initialize GitHub swarm
- Tool: `mcp__gemini-flow__github_swarm`
- Parameters: `{"repository": "owner/repo", "agents": 5, "focus": "maintenance"}`
- Result: Specialized swarm for repository management
**Step 2:** Analyze repository health
- Tool: `mcp__gemini-flow__repo_analyze`
- Parameters: `{"deep": true, "include": ["issues", "prs", "code"]}`
- Result: Comprehensive repository analysis
**Step 3:** Enhance pull requests
- Tool: `mcp__gemini-flow__pr_enhance`
- Parameters: `{"pr_number": 123, "add_tests": true, "improve_docs": true}`
- Result: AI-powered PR improvements
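Put together as a single batched message (reusing the parameters from the three steps above), the GitHub workflow looks like this sketch:
```javascript
// GitHub repository management coordinated in ONE message
[Single Message]:
  - mcp__gemini-flow__github_swarm { repository: "owner/repo", agents: 5, focus: "maintenance" }
  - mcp__gemini-flow__repo_analyze { deep: true, include: ["issues", "prs", "code"] }
  - mcp__gemini-flow__pr_enhance { pr_number: 123, add_tests: true, improve_docs: true }
```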
## Best Practices for Coordination
### DO:
- Use MCP tools to coordinate Gemini Code's approach to complex tasks
- Let the swarm break down problems into manageable pieces
- Use memory tools to maintain context across sessions
- Monitor coordination effectiveness with status tools
- Train neural patterns for better coordination over time
- Leverage GitHub tools for repository management
### DON'T:
- Expect agents to write code (Gemini Code does all implementation)
- Use MCP tools for file operations (use Gemini Code's native tools)
- Try to make agents execute bash commands (Gemini Code handles this)
- Confuse coordination with execution (MCP coordinates, Gemini executes)
## Memory and Persistence
The swarm provides persistent memory that helps Gemini Code:
- Remember project context across sessions
- Track decisions and rationale
- Maintain consistency in large projects
- Learn from previous coordination patterns
- Store GitHub workflow preferences
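As a sketch of how this persistence is used in practice, following the memory coordination pattern shown later in this document (the key names here are illustrative):
```javascript
// Store a decision so later sessions and other agents can find it
mcp__gemini-flow__memory_usage { action: "store", key: "project/auth/decision", value: { decision: "use JWT", rationale: "stateless sessions" } }
// Retrieve it in a later session before continuing work
mcp__gemini-flow__memory_usage { action: "retrieve", key: "project/auth/decision" }
// List everything stored for the project
mcp__gemini-flow__memory_usage { action: "list", pattern: "project/*" }
```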
## Performance Benefits
When using Gemini Flow coordination with Gemini Code:
- **84.8% SWE-Bench solve rate** - Better problem-solving through coordination
- **32.3% token reduction** - Efficient task breakdown reduces redundancy
- **2.8-4.4x speed improvement** - Parallel coordination strategies
- **27+ neural models** - Diverse cognitive approaches
- **GitHub automation** - Streamlined repository management
## Gemini Code Hooks Integration
Gemini Flow includes powerful hooks that automate coordination:
### Pre-Operation Hooks
- **Auto-assign agents** before file edits based on file type
- **Validate commands** before execution for safety
- **Prepare resources** automatically for complex operations
- **Optimize topology** based on task complexity analysis
- **Cache searches** for improved performance
- **GitHub context** loading for repository operations
### Post-Operation Hooks
- **Auto-format code** using language-specific formatters
- **Train neural patterns** from successful operations
- **Update memory** with operation context
- **Analyze performance** and identify bottlenecks
- **Track token usage** for efficiency metrics
- **Sync GitHub** state for consistency
### Session Management
- **Generate summaries** at session end
- **Persist state** across Gemini Code sessions
- **Track metrics** for continuous improvement
- **Restore previous** session context automatically
- **Export workflows** for reuse
### Advanced Features (v2.0.0!)
- **Automatic Topology Selection** - Optimal swarm structure for each task
- **Parallel Execution** - 2.8-4.4x speed improvements
- **Neural Training** - Continuous learning from operations
- **Bottleneck Analysis** - Real-time performance optimization
- **Smart Auto-Spawning** - Zero manual agent management
- **Self-Healing Workflows** - Automatic error recovery
- **Cross-Session Memory** - Persistent learning & context
- **GitHub Integration** - Repository-aware swarms
### Configuration
Hooks are pre-configured in `.gemini/settings.json`. Key features:
- Automatic agent assignment for different file types
- Code formatting on save
- Neural pattern learning from edits
- Session state persistence
- Performance tracking and optimization
- Intelligent caching and token reduction
- GitHub workflow automation
See `.gemini/commands/` for detailed documentation on all features.
## Integration Tips
1. **Start Simple**: Begin with basic swarm init and single agent
2. **Scale Gradually**: Add more agents as task complexity increases
3. **Use Memory**: Store important decisions and context
4. **Monitor Progress**: Regular status checks ensure effective coordination
5. **Train Patterns**: Let neural agents learn from successful coordinations
6. **Enable Hooks**: Use the pre-configured hooks for automation
7. **GitHub First**: Use GitHub tools for repository management
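For tip 1, a minimal starting point might look like this sketch (the agent name and memory key are illustrative):
```javascript
// Start simple: one small swarm, one coordinator, one stored decision
[Single Message]:
  - mcp__gemini-flow__swarm_init { topology: "star", maxAgents: 1 }
  - mcp__gemini-flow__agent_spawn { type: "coordinator", name: "Solo Coordinator" }
  - mcp__gemini-flow__memory_usage { action: "store", key: "session/start", value: { goal: "first coordinated task" } }
```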
## SWARM ORCHESTRATION PATTERN
### CRITICAL INSTRUCTION: You are the SWARM ORCHESTRATOR - IMMEDIATELY SPAWN AGENTS IN PARALLEL to execute tasks
**MANDATORY**: When using swarms, you MUST:
1. **SPAWN ALL AGENTS IN ONE BATCH** - Use multiple tool calls in a SINGLE message
2. **EXECUTE TASKS IN PARALLEL** - Never wait for one task before starting another
3. **USE BATCHTOOL FOR EVERYTHING** - Multiple operations = Single message with multiple tools
4. **ALL AGENTS MUST USE COORDINATION TOOLS** - Every spawned agent MUST use gemini-flow hooks and memory
### AGENT COUNT CONFIGURATION
**CRITICAL: Dynamic Agent Count Rules**
1. **Check CLI Arguments First**: If user runs `npx gemini-flow@alpha --agents 5`, use 5 agents
2. **Auto-Decide if No Args**: Without CLI args, analyze task complexity:
- Simple tasks (1-3 components): 3-4 agents
- Medium tasks (4-6 components): 5-7 agents
- Complex tasks (7+ components): 8-12 agents
3. **Agent Type Distribution**: Balance agent types based on task:
- Always include 1 coordinator
- For code-heavy tasks: more coders
- For design tasks: more architects/analysts
- For quality tasks: more testers/reviewers
**Example Auto-Decision Logic:**
```javascript
// If CLI args provided: npx gemini-flow@alpha --agents 6
const maxAgents = CLI_ARGS.agents || determineAgentCount(task);

function determineAgentCount(task) {
  // Analyze task complexity by checking for keywords in the task description
  const mentions = (keywords) => keywords.some((k) => task.includes(k));
  if (mentions(['API', 'database', 'auth', 'tests'])) return 8;
  if (mentions(['frontend', 'backend'])) return 6;
  if (mentions(['simple', 'script'])) return 3;
  return 5; // default
}
```
## MANDATORY AGENT COORDINATION PROTOCOL
### CRITICAL: Every Agent MUST Follow This Protocol
When you spawn an agent using the Task tool, that agent MUST:
**1. BEFORE Starting Work:**
```bash
# Check previous work and load context
npx gemini-flow@alpha hooks pre-task --description "[agent task]" --auto-spawn-agents false
npx gemini-flow@alpha hooks session-restore --session-id "swarm-[id]" --load-memory true
```
**2. DURING Work (After EVERY Major Step):**
```bash
# Store progress in memory after each file operation
npx gemini-flow@alpha hooks post-edit --file "[filepath]" --memory-key "swarm/[agent]/[step]"
# Store decisions and findings
npx gemini-flow@alpha hooks notify --message "[what was done]" --telemetry true
# Check coordination with other agents
npx gemini-flow@alpha hooks pre-search --query "[what to check]" --cache-results true
```
**3. AFTER Completing Work:**
```bash
# Save all results and learnings
npx gemini-flow@alpha hooks post-task --task-id "[task]" --analyze-performance true
npx gemini-flow@alpha hooks session-end --export-metrics true --generate-summary true
```
### AGENT PROMPT TEMPLATE
When spawning agents, ALWAYS include these coordination instructions:
```
You are the [Agent Type] agent in a coordinated swarm.
MANDATORY COORDINATION:
1. START: Run `npx gemini-flow@alpha hooks pre-task --description "[your task]"`
2. DURING: After EVERY file operation, run `npx gemini-flow@alpha hooks post-edit --file "[file]" --memory-key "agent/[step]"`
3. MEMORY: Store ALL decisions using `npx gemini-flow@alpha hooks notify --message "[decision]"`
4. END: Run `npx gemini-flow@alpha hooks post-task --task-id "[task]" --analyze-performance true`
Your specific task: [detailed task description]
REMEMBER: Coordinate with other agents by checking memory BEFORE making decisions!
```
### PARALLEL EXECUTION IS MANDATORY
**THIS IS WRONG (Sequential - NEVER DO THIS):**
```
Message 1: Initialize swarm
Message 2: Spawn agent 1
Message 3: Spawn agent 2
Message 4: TodoWrite (single todo)
Message 5: Create file 1
Message 6: TodoWrite (another single todo)
```
**THIS IS CORRECT (Parallel - ALWAYS DO THIS):**
```
Message 1: [BatchTool]
// MCP coordination setup
- mcp__gemini-flow__swarm_init
- mcp__gemini-flow__agent_spawn (researcher)
- mcp__gemini-flow__agent_spawn (coder)
- mcp__gemini-flow__agent_spawn (analyst)
- mcp__gemini-flow__agent_spawn (tester)
- mcp__gemini-flow__agent_spawn (coordinator)
Message 2: [BatchTool - Gemini Code execution]
// Task agents with full coordination instructions
- Task("You are researcher agent. MANDATORY: Run hooks pre-task, post-edit, post-task. Task: Research API patterns")
- Task("You are coder agent. MANDATORY: Run hooks pre-task, post-edit, post-task. Task: Implement REST endpoints")
- Task("You are analyst agent. MANDATORY: Run hooks pre-task, post-edit, post-task. Task: Analyze performance")
- Task("You are tester agent. MANDATORY: Run hooks pre-task, post-edit, post-task. Task: Write comprehensive tests")
// TodoWrite with ALL todos batched
- TodoWrite { todos: [
{id: "research", content: "Research API patterns", status: "in_progress", priority: "high"},
{id: "design", content: "Design database schema", status: "pending", priority: "high"},
{id: "implement", content: "Build REST endpoints", status: "pending", priority: "high"},
{id: "test", content: "Write unit tests", status: "pending", priority: "medium"},
{id: "docs", content: "Create API documentation", status: "pending", priority: "low"},
{id: "deploy", content: "Setup deployment", status: "pending", priority: "medium"}
]}
// File operations in parallel
- Write "api/package.json"
- Write "api/server.js"
- Write "api/routes/users.js"
- Bash "mkdir -p api/{routes,models,tests}"
```
### MANDATORY SWARM PATTERN
When given ANY complex task with swarms:
```
STEP 1: IMMEDIATE PARALLEL SPAWN (Single Message!)
[BatchTool]:
// IMPORTANT: Check CLI args for agent count, otherwise auto-decide based on task complexity
- mcp__gemini-flow__swarm_init {
topology: "hierarchical",
maxAgents: CLI_ARGS.agents || AUTO_DECIDE(task_complexity), // Use CLI args or auto-decide
strategy: "parallel"
}
// Spawn agents based on maxAgents count and task requirements
// If CLI specifies 3 agents, spawn 3. If no args, auto-decide optimal count (3-12)
- mcp__gemini-flow__agent_spawn { type: "architect", name: "System Designer" }
- mcp__gemini-flow__agent_spawn { type: "coder", name: "API Developer" }
- mcp__gemini-flow__agent_spawn { type: "coder", name: "Frontend Dev" }
- mcp__gemini-flow__agent_spawn { type: "analyst", name: "DB Designer" }
- mcp__gemini-flow__agent_spawn { type: "tester", name: "QA Engineer" }
- mcp__gemini-flow__agent_spawn { type: "researcher", name: "Tech Lead" }
- mcp__gemini-flow__agent_spawn { type: "coordinator", name: "PM" }
- TodoWrite { todos: [multiple todos at once] }
STEP 2: PARALLEL TASK EXECUTION (Single Message!)
[BatchTool]:
- mcp__gemini-flow__task_orchestrate { task: "main task", strategy: "parallel" }
- mcp__gemini-flow__memory_usage { action: "store", key: "init", value: {...} }
- Multiple Read operations
- Multiple Write operations
- Multiple Bash commands
STEP 3: CONTINUE PARALLEL WORK (Never Sequential!)
```
### VISUAL TASK TRACKING FORMAT
Use this format when displaying task progress:
```
Progress Overview
├── Total Tasks: X
├── ✅ Completed: X (X%)
├── In Progress: X (X%)
├── Todo: X (X%)
└── Blocked: X (X%)

Todo (X)
└── 🔴 001: [Task description] [PRIORITY] ▶

In Progress (X)
├── 🟡 002: [Task description] ⏳ X deps ▶
└── 🔴 003: [Task description] [PRIORITY] ▶

✅ Completed (X)
├── ✅ 004: [Task description]
└── ... (more completed tasks)

Priority indicators: 🔴 HIGH/CRITICAL, 🟡 MEDIUM, 🟢 LOW
Dependencies: ⏳ X deps | Actionable: ▶
```
### REAL EXAMPLE: Full-Stack App Development
**Task**: "Build a complete REST API with authentication, database, and tests"
**MANDATORY APPROACH - Everything in Parallel:**
```javascript
// CORRECT: SINGLE MESSAGE with ALL operations
[BatchTool - Message 1]:
// Initialize and spawn ALL agents at once
mcp__gemini-flow__swarm_init { topology: "hierarchical", maxAgents: 8, strategy: "parallel" }
mcp__gemini-flow__agent_spawn { type: "architect", name: "System Designer" }
mcp__gemini-flow__agent_spawn { type: "coder", name: "API Developer" }
mcp__gemini-flow__agent_spawn { type: "coder", name: "Auth Expert" }
mcp__gemini-flow__agent_spawn { type: "analyst", name: "DB Designer" }
mcp__gemini-flow__agent_spawn { type: "tester", name: "Test Engineer" }
mcp__gemini-flow__agent_spawn { type: "coordinator", name: "Lead" }
// Update ALL todos at once - NEVER split todos!
TodoWrite { todos: [
{ id: "design", content: "Design API architecture", status: "in_progress", priority: "high" },
{ id: "auth", content: "Implement authentication", status: "pending", priority: "high" },
{ id: "db", content: "Design database schema", status: "pending", priority: "high" },
{ id: "api", content: "Build REST endpoints", status: "pending", priority: "high" },
{ id: "tests", content: "Write comprehensive tests", status: "pending", priority: "medium" },
{ id: "docs", content: "Document API endpoints", status: "pending", priority: "low" },
{ id: "deploy", content: "Setup deployment pipeline", status: "pending", priority: "medium" },
{ id: "monitor", content: "Add monitoring", status: "pending", priority: "medium" }
]}
// Start orchestration
mcp__gemini-flow__task_orchestrate { task: "Build REST API", strategy: "parallel" }
// Store initial memory
mcp__gemini-flow__memory_usage { action: "store", key: "project/init", value: { started: Date.now() } }
[BatchTool - Message 2]:
// Create ALL directories at once
Bash("mkdir -p test-app/{src,tests,docs,config}")
Bash("mkdir -p test-app/src/{models,routes,middleware,services}")
Bash("mkdir -p test-app/tests/{unit,integration}")
// Write ALL base files at once
Write("test-app/package.json", packageJsonContent)
Write("test-app/.env.example", envContent)
Write("test-app/README.md", readmeContent)
Write("test-app/src/server.js", serverContent)
Write("test-app/src/config/database.js", dbConfigContent)
[BatchTool - Message 3]:
// Read multiple files for context
Read("test-app/package.json")
Read("test-app/src/server.js")
Read("test-app/.env.example")
// Run multiple commands
Bash("cd test-app && npm install")
Bash("cd test-app && npm run lint")
Bash("cd test-app && npm test")
```
### NEVER DO THIS (Sequential = WRONG):
```javascript
// WRONG: Multiple messages, one operation each
Message 1: mcp__gemini-flow__swarm_init
Message 2: mcp__gemini-flow__agent_spawn (just one agent)
Message 3: mcp__gemini-flow__agent_spawn (another agent)
Message 4: TodoWrite (single todo)
Message 5: Write (single file)
// This is 5x slower and wastes swarm coordination!
```
### MEMORY COORDINATION PATTERN
Every agent coordination step MUST use memory:
```
// After each major decision or implementation
mcp__gemini-flow__memory_usage
action: "store"
key: "swarm-{id}/agent-{name}/{step}"
value: {
timestamp: Date.now(),
decision: "what was decided",
implementation: "what was built",
nextSteps: ["step1", "step2"],
dependencies: ["dep1", "dep2"]
}
// To retrieve coordination data
mcp__gemini-flow__memory_usage
action: "retrieve"
key: "swarm-{id}/agent-{name}/{step}"
// To check all swarm progress
mcp__gemini-flow__memory_usage
action: "list"
pattern: "swarm-{id}/*"
```
### PERFORMANCE TIPS
1. **Batch Everything**: Never operate on single files when multiple are needed
2. **Parallel First**: Always think "what can run simultaneously?"
3. **Memory is Key**: Use memory for ALL cross-agent coordination
4. **Monitor Progress**: Use `mcp__gemini-flow__swarm_monitor` for real-time tracking
5. **Auto-Optimize**: Let hooks handle topology and agent selection
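For tips 3-4, a sketch of orchestration, monitoring, and a memory check batched into one message; `swarm_monitor` is shown without arguments because its parameters are not documented here:
```javascript
// Orchestrate, monitor, and review coordination memory in the same batched message
[BatchTool]:
  - mcp__gemini-flow__task_orchestrate { task: "main task", strategy: "parallel" }
  - mcp__gemini-flow__swarm_monitor
  - mcp__gemini-flow__memory_usage { action: "list", pattern: "swarm-{id}/*" }
```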
### VISUAL SWARM STATUS
When showing swarm status, use this format:
```
Swarm Status: ACTIVE
├── Topology: hierarchical
├── Agents: 6/8 active
├── Mode: parallel execution
├── Tasks: 12 total (4 complete, 6 in-progress, 2 pending)
└── Memory: 15 coordination points stored

Agent Activity:
├── 🟢 architect: Designing database schema...
├── 🟢 coder-1: Implementing auth endpoints...
├── 🟢 coder-2: Building user CRUD operations...
├── 🟢 analyst: Optimizing query performance...
├── 🟡 tester: Waiting for auth completion...
└── 🟢 coordinator: Monitoring progress...
```
## CRITICAL: TODOWRITE AND TASK TOOL BATCHING
### MANDATORY BATCHING RULES FOR TODOS AND TASKS
**TodoWrite Tool Requirements:**
1. **ALWAYS** include 5-10+ todos in a SINGLE TodoWrite call
2. **NEVER** call TodoWrite multiple times in sequence
3. **BATCH** all todo updates together - status changes, new todos, completions
4. **INCLUDE** all priority levels (high, medium, low) in one call
**Task Tool Requirements:**
1. **SPAWN** all agents using Task tool in ONE message
2. **NEVER** spawn agents one by one across multiple messages
3. **INCLUDE** full task descriptions and coordination instructions
4. **BATCH** related Task calls together for parallel execution
**Example of CORRECT TodoWrite usage:**
```javascript
// CORRECT - All todos in ONE call
TodoWrite { todos: [
{ id: "1", content: "Initialize system", status: "completed", priority: "high" },
{ id: "2", content: "Analyze requirements", status: "in_progress", priority: "high" },
{ id: "3", content: "Design architecture", status: "pending", priority: "high" },
{ id: "4", content: "Implement core", status: "pending", priority: "high" },
{ id: "5", content: "Build features", status: "pending", priority: "medium" },
{ id: "6", content: "Write tests", status: "pending", priority: "medium" },
{ id: "7", content: "Add monitoring", status: "pending", priority: "medium" },
{ id: "8", content: "Documentation", status: "pending", priority: "low" },
{ id: "9", content: "Performance tuning", status: "pending", priority: "low" },
{ id: "10", content: "Deploy to production", status: "pending", priority: "high" }
]}
```
**Example of WRONG TodoWrite usage:**
```javascript
// WRONG - Multiple TodoWrite calls
Message 1: TodoWrite { todos: [{ id: "1", content: "Task 1", ... }] }
Message 2: TodoWrite { todos: [{ id: "2", content: "Task 2", ... }] }
Message 3: TodoWrite { todos: [{ id: "3", content: "Task 3", ... }] }
// This breaks parallel coordination!
```
## Gemini Flow v2.1.0 Features (NEW! Context7 Integration)
Gemini Flow v2.1.0 extends the base coordination with:
- **Context7 Integration** - Real-time library documentation via MCP
- **Smart Documentation** - Automatic library detection and context injection
- **Context Caching** - High-performance documentation cache system
- **GitHub Integration** - Deep repository management
- **Project Templates** - Quick-start for common projects
- **Advanced Analytics** - Detailed performance insights
- **Custom Agent Types** - Domain-specific coordinators
- **Workflow Automation** - Reusable task sequences
- **Enhanced Security** - Safer command execution
### Context7 MCP Tools (6 New Tools)
**Real-time Documentation Integration:**
```javascript
// Context7 MCP tools for up-to-date library documentation
mcp__gemini-flow__context7_search     // Search current documentation
mcp__gemini-flow__context7_cache      // Cache management for performance
mcp__gemini-flow__context7_integrate  // Swarm context integration
mcp__gemini-flow__context7_status     // Status and metrics
mcp__gemini-flow__context7_clear      // Reset cache and state
mcp__gemini-flow__context7_config     // Manage configuration
```
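A hypothetical invocation sketch for these tools; the parameter names (`library`, `topic`, `scope`) are assumptions for illustration only and are not documented above:
```javascript
// Hypothetical parameters - illustrative only, not a documented API
mcp__gemini-flow__context7_search { library: "react", topic: "hooks" }   // 'library' and 'topic' are assumed names
mcp__gemini-flow__context7_status {}                                     // check cache metrics before clearing
mcp__gemini-flow__context7_clear { scope: "cache" }                      // 'scope' is an assumed name
```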
### Context7 Enhanced Workflows
**Automatic library detection:**
```javascript
// The agent detects libraries automatically and fetches their documentation
Task("You are the coder agent. Implement JWT auth for Next.js. use context7")
// → The agent automatically receives Next.js + JWT documentation

// Multi-library support with automatic context injection
mcp__gemini-flow__task_orchestrate({
  task: "Full-stack: React + Prisma + Tailwind + TypeScript. use context7",
  agents: ["frontend-dev", "backend-dev", "database-specialist"],
  context7: true // Enables automatic documentation lookup
})
```
**Context7-Enhanced Agent Workflows:**
```javascript
// Swarm with shared context for all agents
mcp__gemini-flow__swarm_init({ topology: "hierarchical", maxAgents: 6 })
// All agents automatically use Context7
Task("Research team: Analyze React 18 features. use context7")
Task("Coder team: Implement with the current React 18 APIs. use context7")
Task("Tester team: Test React 18 concurrent features. use context7")
// → All agents share the retrieved React 18 documentation
```
### Context7 Performance Boosts
**Performance improvements with Context7:**
- **Documentation Access**: 500% more up-to-date than static documentation
- **Agent Accuracy**: +40% through up-to-date library information
- **Development Speed**: +60% through automatic context injection
- **Error Reduction**: +50% through correct API usage examples
- **Cache Hit Rate**: >80% for frequently used libraries
- **Context Sharing**: Swarm-wide documentation synchronization
### Context7 Configuration
**Extended gemini-flow.config.json:**
```json
{
  "features": {
    "context7Integration": true,
    "smartDocumentation": true,
    "contextCaching": true
  },
  "context7": {
    "enabled": true,
    "cacheTimeout": 3600,
    "maxCacheSize": "100MB",
    "libraryWhitelist": ["react", "nextjs", "typescript", "node", "express", "prisma"],
    "autoContext": true,
    "shareContextWithSwarm": true,
    "autoDetectLibraries": true,
    "maxAutoRequests": 10,
    "cooldownMs": 5000
  }
}
```
### Context7 Use Cases
**Enhanced Multi-Agent Development:**
```javascript
// Context7 makes all 54 agent types smarter:

// Backend development with Context7
Task("backend-dev: API with Express + Prisma + JWT. use context7")
// → Receives current Express.js, Prisma, and JWT documentation

// Frontend development with Context7
Task("frontend-dev: React + Next.js + Tailwind UI. use context7")
// → Receives current React, Next.js, and Tailwind documentation

// Testing with Context7
Task("tester: Jest + Testing Library + Playwright. use context7")
// → Receives current testing-framework documentation

// Mobile development with Context7
Task("mobile-dev: React Native + Expo Router. use context7")
// → Receives current React Native + Expo documentation
```
**Automatic library detection in code:**
```javascript
// Context7 automatically detects libraries in code:
const code = `
import React, { useState } from 'react';
import { PrismaClient } from '@prisma/client';
import express from 'express';
`;
// Context7 detects: react, prisma, express
// It automatically fetches documentation for all detected libraries
// and shares the context with all agents in the swarm
```
## Support
- Documentation: https://github.com/ruvnet/gemini-flow
- Issues: https://github.com/ruvnet/gemini-flow/issues
- Examples: https://github.com/ruvnet/gemini-flow/tree/main/examples
---
Remember: **Gemini Flow coordinates, Gemini Code creates!** Start with `mcp__gemini-flow__swarm_init` to enhance your development workflow.