// mcp-subagents
// Multi-Agent AI Orchestration via Model Context Protocol: access specialized
// CLI AI agents (Aider, Qwen, Gemini, Goose, etc.) with intelligent fallback
// and configuration.
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { CallToolRequestSchema, ListToolsRequestSchema, ListPromptsRequestSchema, GetPromptRequestSchema, ListResourcesRequestSchema, ReadResourceRequestSchema } from '@modelcontextprotocol/sdk/types.js';
import { SERVER_IDENTITY } from './constants.js';
import { TaskConfig, AgentName } from './types/index.js';
export class AgentOrchestratorServer {
manager;
server;
constructor(manager) {
this.manager = manager;
// Set MCP mode to prevent stderr output
process.env['MCP_MODE'] = '1';
this.server = new Server({
name: SERVER_IDENTITY.mcpName,
version: SERVER_IDENTITY.mcpVersion
}, {
capabilities: {
tools: {},
prompts: {},
resources: {}
}
});
this.setupTools();
this.setupPrompts();
this.setupResources();
}
setupTools() {
// List available tools
this.server.setRequestHandler(ListToolsRequestSchema, async () => {
return {
tools: [
{
name: 'run_agent',
description: `Execute a CLI agent to perform a task with automatic fallback.
PURPOSE: Delegates complex tasks to specialized AI agents while minimizing context usage.
CRITICAL INFORMATION FOR AI AGENTS:
- Returns only LAST 10 LINES of output (to preserve your context)
- Full output is stored and accessible via get_task_status
- Returns immediately with taskId for async operations
- Supports automatic fallback if primary agent fails
TASK INPUT GUIDELINES:
- DIRECT TEXT: Use for SHORT tasks (≤200 chars or ≤3 lines)
Examples: "Fix the bug in auth.py", "Add type hints", "Refactor the login function"
- FILE-BASED: REQUIRED for longer/complex tasks
Examples: Multi-step instructions, code snippets, detailed requirements
FILE-BASED TASKS (PREVENT TERMINAL FLOODING):
For tasks >200 chars, >3 lines, or with special characters:
1. Write task content to a temporary file in /tmp/ directory
2. Pass file:// followed by the ABSOLUTE path as the task parameter
3. Server reads file content and IMMEDIATELY DELETES the temp file
4. This keeps tool calls clean and prevents terminal display issues
IMPORTANT FILE-BASED TASK RULES:
- ONLY these directories allowed: /tmp/, /var/tmp/, or TMPDIR environment variable
- MUST use absolute paths (starting with /)
- File is deleted immediately after reading (plan accordingly)
- If file doesn't exist or can't be read, the task will fail with an error
RETURN VALUE:
{
success: boolean, // Did the task complete successfully?
taskId: string, // Use this to retrieve full output later
agent: string, // Requested agent
executedBy: string, // Agent that actually ran (may differ due to fallback)
fallbackUsed: boolean, // Was a fallback agent used?
attempts: Array<{ // History of execution attempts
agent: string,
status: 'success' | 'failed' | 'busy' | 'unavailable',
error?: string
}>,
output: string[] // ONLY last 10 lines - use get_task_status for more
}
BEST PRACTICES:
1. Check 'success' first - if false, examine 'attempts' for errors
2. Use 'output' for quick validation (e.g., "File created", "Tests passed")
3. For detailed results, always call get_task_status with the taskId
4. Consider fallbackAgents parameter for critical tasks
5. Use 'files' parameter to restrict scope for safety
6. Don't confuse taskId (returned result) with file:// paths (input parameter)
COMMON PITFALLS TO AVOID:
- Don't assume output contains everything - it's truncated!
- Don't parse complex results from output - get full output first
- Check executedBy !== agent to know if fallback was used
- Large tasks may take time - use get_task_status to poll progress
- Don't confuse: taskId (e.g., "task_123_abc") vs file path (e.g., "file:///tmp/task.txt")
EXAMPLES:
// Direct text (SHORT tasks ≤200 chars, ≤3 lines)
- run_agent({ agent: "aider", task: "Fix the bug in auth.py", files: ["auth.py"] })
- run_agent({ agent: "qwen", task: "Add type hints to all functions" })
- run_agent({ agent: "gemini", task: "Write unit tests for the User class" })
// File-based (LONG/COMPLEX tasks >200 chars or >3 lines)
- Write to /tmp/complex-task-1753854599841.txt:
"Analyze the entire codebase and:
1. Document all API endpoints
2. Create sequence diagrams for auth flow
3. Identify performance bottlenecks
4. Suggest refactoring opportunities..."
- run_agent({ agent: "claude", task: "file:///tmp/complex-task-1753854599841.txt" })
Available agents: qwen, gemini, aider, goose, codex, opencode, claude`,
inputSchema: {
type: 'object',
properties: {
agent: {
type: 'string',
enum: ['qwen', 'gemini', 'aider', 'goose', 'codex', 'opencode', 'claude'],
description: 'The agent to use'
},
task: {
type: 'string',
description: 'The task/prompt for the agent. KEEP SHORT for direct text (≤200 chars or ≤3 lines). For longer tasks use: file:///absolute/path/to/temp/file.txt. Direct text best for: "Fix bug in auth.py", "Add error handling". File-based required for: multi-paragraph instructions, special chars, code examples. Files must be in /tmp/, /var/tmp/, or TMPDIR and are deleted immediately.'
},
model: {
type: 'string',
description: 'Optional: Model to use (if supported by agent)'
},
files: {
type: 'array',
items: { type: 'string' },
description: 'Optional: Files to restrict the agent to'
},
sandbox: {
type: 'boolean',
description: 'Optional: Run in sandbox mode (if supported)'
},
workingDirectory: {
type: 'string',
description: 'Optional: Working directory for the agent'
},
env: {
type: 'object',
additionalProperties: { type: 'string' },
description: 'Optional: Additional environment variables to pass to the agent (e.g., {"AIDER_AUTO_COMMITS": "true", "DEBUG": "1"})'
},
flags: {
type: 'array',
items: { type: 'string' },
description: 'Optional: Additional command-line flags to pass to the agent (e.g., ["--yes-always", "--model", "gpt-4o", "--auto-commits"])'
},
fallbackAgents: {
type: 'array',
items: {
type: 'string',
enum: ['qwen', 'gemini', 'aider', 'goose', 'codex', 'opencode', 'claude']
},
description: 'Optional: Agents to try if primary fails'
},
async: {
type: 'boolean',
description: 'Optional: Run task asynchronously (returns taskId immediately, task runs in background)'
}
},
required: ['agent', 'task']
}
},
{
name: 'list_agents',
description: `Get the status of all available CLI agents.
PURPOSE: Check agent availability before running tasks and diagnose issues.
CRITICAL INFORMATION FOR AI AGENTS:
- Call this BEFORE run_agent to verify agent is ready
- Shows real-time availability and workload
- Helps you choose the best agent for your task
- Essential for debugging failed tasks
RETURN VALUE:
{
agents: Array<{
name: string, // Agent identifier (use in run_agent)
available: boolean, // Can this agent be used?
status: string, // 'ready' | 'busy' | 'needs_auth' | 'not_installed'
version?: string, // Agent version if available
currentTasks: number, // How many tasks is it running?
maxConcurrent: number, // How many tasks can it handle?
authInstructions?: string // What to do if auth needed
}>
}
AGENT CAPABILITIES:
- qwen: Best for code generation, refactoring, analysis
- aider: Best for file editing, bug fixes, git integration
- gemini: Best for documentation, explanations, reviews
- goose: Best for exploration, automation, project understanding
- claude: Best for complex reasoning, architecture decisions
- codex: Best for debugging, system tasks
- opencode: General purpose, multimodal support
BEST PRACTICES:
1. Always check before critical tasks
2. Use agents with currentTasks < maxConcurrent for faster response
3. Have fallback plans for agents showing 'needs_auth'
4. Prefer 'ready' agents over those needing setup
COMMON PATTERNS:
- Filter for available agents: agents.filter(a => a.available && a.status === 'ready')
- Check specific agent: agents.find(a => a.name === 'aider')?.available
- Find least busy: agents.sort((a, b) => a.currentTasks - b.currentTasks)[0]`,
inputSchema: {
type: 'object',
properties: {},
required: []
}
},
{
name: 'get_task_status',
description: `Check the progress and output of a running or completed task.
PURPOSE: Retrieve full output and monitor task progress with fine-grained control.
CRITICAL INFORMATION FOR AI AGENTS:
- This is how you get FULL OUTPUT (run_agent only gives last 10 lines)
- Default returns last 20 lines - perfect for most checks
- Supports pagination for large outputs
- Can poll running tasks to monitor progress
- Essential for debugging failures
RETURN VALUE:
{
taskId: string,
status: 'queued' | 'running' | 'completed' | 'failed' | 'cancelled',
agent: string, // Which agent ran this
startTime?: number, // Unix timestamp
endTime?: number, // Unix timestamp (if finished)
duration?: number, // Milliseconds (if finished)
output: string[], // Requested portion of output (see outputOptions)
error?: string, // Error message if failed
progress?: number // 0-100 percentage
}
OUTPUT OPTIONS (all optional):
{
offset?: number, // Start from line N (0-based)
limit?: number, // Max lines to return (default: 20)
fromEnd?: boolean, // Count offset from end (default: true)
maxChars?: number, // Truncate if exceeds character limit
search?: string, // Regex pattern to find in output
context?: number, // Lines before/after each match (0-10)
outputMode?: 'content' | 'matches_only' | 'count', // Shape of search results
ignoreCase?: boolean, // Case-insensitive search
matchNumbers?: boolean // Include line numbers in results
}
SMART DEFAULTS:
- No options = last 20 lines (usually what you want)
- Just limit = last N lines
- offset + limit = specific range
- fromEnd: false + offset = lines from beginning
BEST PRACTICES:
1. First call with no options to see recent output
2. If truncated, increase limit or use offset to paginate
3. For errors, check both 'error' field and last output lines
4. Poll running tasks every few seconds for progress
5. Use maxChars to prevent context overflow on huge outputs
COMMON PATTERNS:
// Quick check of result
get_task_status({ taskId: "task_123" })
// Get everything (careful with context!)
get_task_status({ taskId: "task_123", outputOptions: { limit: 1000 } })
// Debug error - get last 50 lines
get_task_status({ taskId: "task_123", outputOptions: { limit: 50 } })
// Page through large output
get_task_status({ taskId: "task_123", outputOptions: { offset: 0, limit: 100, fromEnd: false } })
get_task_status({ taskId: "task_123", outputOptions: { offset: 100, limit: 100, fromEnd: false } })
// SEARCH FUNCTIONALITY - Find specific content in output
// Search for errors with context
get_task_status({ taskId: "task_123", outputOptions: { search: "error|failed|exception", context: 2 } })
// Find modified files (just the matches)
get_task_status({ taskId: "task_123", outputOptions: { search: "src/.*\\.ts", outputMode: "matches_only" } })
// Count successful operations
get_task_status({ taskId: "task_123", outputOptions: { search: "✓|passed|success", outputMode: "count" } })
// Case-insensitive search with line numbers
get_task_status({ taskId: "task_123", outputOptions: { search: "TODO", ignoreCase: true, matchNumbers: true } })
// Complex search with all options
get_task_status({
taskId: "task_123",
outputOptions: {
search: "commit [a-f0-9]{7}",
context: 1,
outputMode: "content",
ignoreCase: false,
matchNumbers: true,
limit: 10
}
})
// Monitor progress
while (status.status === 'running') {
await sleep(3000);
status = await get_task_status({ taskId });
}`,
inputSchema: {
type: 'object',
properties: {
taskId: {
type: 'string',
description: 'The task ID to check'
},
outputOptions: {
type: 'object',
properties: {
offset: {
type: 'number',
description: 'Starting line number (0-based)'
},
limit: {
type: 'number',
description: 'Maximum number of lines to return'
},
fromEnd: {
type: 'boolean',
description: 'If true, offset from the end of output instead of beginning'
},
maxChars: {
type: 'number',
description: 'Maximum total characters to return'
},
search: {
type: 'string',
description: 'Regex pattern to search for in output'
},
context: {
type: 'number',
description: 'Number of lines before/after each match to include (0-10)'
},
outputMode: {
type: 'string',
enum: ['content', 'matches_only', 'count'],
description: 'How to return search results: content (full lines), matches_only (just matching portions), count (just numbers)'
},
ignoreCase: {
type: 'boolean',
description: 'Case insensitive search'
},
matchNumbers: {
type: 'boolean',
description: 'Include line numbers in search results'
}
},
description: 'Control how much output to retrieve and search within it'
}
},
required: ['taskId']
}
},
{
name: 'list_running_tasks',
description: `List all currently running tasks with their status and metadata.
PURPOSE: Monitor and manage multiple running agents for better orchestration.
CRITICAL INFORMATION FOR AI AGENTS:
- Shows real-time status of all active tasks
- Includes runtime, PID, output line count, and last activity
- Sort by runtime to find long-running or stuck tasks
- Use includeOutput to see recent progress
RETURN VALUE:
{
tasks: Array<{
taskId: string, // Use with get_task_status or terminate_task
agent: string, // Which agent is running
status: 'running', // Always 'running' for this list
startTime: number, // Unix timestamp when started
runtime: number, // Milliseconds running
pid?: number, // Process ID (if available)
outputLines: number, // Total lines of output generated
lastActivity: number, // Unix timestamp of last output
taskPreview: string, // First 50 chars of task description
currentOutput?: string[] // Last 5 lines if includeOutput=true
}>
}
BEST PRACTICES:
1. Use to check for stuck tasks (long runtime, no recent activity)
2. Sort by 'runtime' to find longest-running tasks first
3. Use includeOutput to quickly see what each task is doing
4. Check lastActivity to identify tasks that may be hung
EXAMPLES:
// Basic list of running tasks
list_running_tasks()
// Detailed view with recent output
list_running_tasks({ includeOutput: true })
// Find longest-running tasks
list_running_tasks({ sortBy: 'runtime' })
// Monitor task activity
list_running_tasks({ sortBy: 'agent', includeOutput: true })`,
inputSchema: {
type: 'object',
properties: {
includeOutput: {
type: 'boolean',
description: 'Include last 5 lines of output for each task'
},
sortBy: {
type: 'string',
enum: ['startTime', 'agent', 'runtime'],
description: 'Sort tasks by: startTime (oldest first), agent (alphabetical), runtime (longest first)'
}
},
additionalProperties: false
}
},
{
name: 'terminate_task',
description: `Terminate a running task gracefully or forcefully.
PURPOSE: Stop long-running, stuck, or unwanted tasks to free resources.
CRITICAL INFORMATION FOR AI AGENTS:
- Terminates the process and all its children
- Graceful termination tries SIGTERM first, then SIGKILL if needed
- Task status changes to 'cancelled'
- Process is cleaned up from system resources
RETURN VALUE:
{
success: boolean, // Was termination successful?
previousStatus: string, // Status before termination attempt
finalStatus: string, // Status after termination ('cancelled' if successful)
runtime?: number, // How long the task ran (milliseconds)
outputLines: number, // Total output lines before termination
message: string // Human-readable result message
}
BEST PRACTICES:
1. Use list_running_tasks first to identify tasks to terminate
2. Graceful termination is default and recommended
3. Check success field - false means task wasn't running or couldn't be killed
4. Terminated tasks can still be queried with get_task_status
EXAMPLES:
// Graceful termination (recommended)
terminate_task({ taskId: "task_123" })
// Force termination (if graceful fails)
terminate_task({ taskId: "task_123", graceful: false })
// Custom timeout for graceful shutdown
terminate_task({ taskId: "task_123", timeout: 10000 })`,
inputSchema: {
type: 'object',
properties: {
taskId: {
type: 'string',
description: 'ID of the task to terminate'
},
graceful: {
type: 'boolean',
description: 'Try graceful shutdown first (SIGTERM) before force kill (default: true)'
},
timeout: {
type: 'number',
description: 'Milliseconds to wait for graceful shutdown before force kill (default: 5000)'
}
},
required: ['taskId'],
additionalProperties: false
}
},
{
name: 'run_agent_sequence',
description: `Execute multiple agents in a workflow sequence with dependencies and result passing.
PURPOSE: Chain multiple tasks together where later tasks can use results from earlier ones.
CRITICAL INFORMATION FOR AI AGENTS:
- Tasks execute in order, respecting dependencies
- Use {{dependencyId.output}} to pass results between tasks
- Returns immediately with workflowId for tracking
- Individual task results accessible via their taskIds
WORKFLOW EXECUTION:
1. Tasks are executed in the order specified
2. Dependencies are checked before execution
3. Template variables are substituted in task descriptions
4. Failed tasks can stop the workflow (unless continueOnFailure=true)
5. Skipped tasks don't count as failures
RETURN VALUE:
{
success: boolean, // Did the workflow complete successfully?
workflowId: string, // Use this to track the workflow
completedTasks: number, // How many tasks completed successfully
totalTasks: number, // Total number of tasks in workflow
tasks: Array<{ // Status of each task
id: string,
agent: string,
taskId?: string, // Individual task ID for detailed results
status: 'pending' | 'running' | 'completed' | 'failed' | 'skipped',
startTime?: number,
endTime?: number,
error?: string
}>,
output: string[] // Combined output from all tasks (last 10 lines)
}
EXAMPLES:
// Short tasks (direct text) in workflow
run_agent_sequence({
tasks: [
{ id: "analyze", agent: "claude", task: "Find security issues in auth.py" }, // 34 chars
{ id: "fix", agent: "aider", task: "Fix these issues: {{analyze.output}}", dependsOn: "analyze" },
{ id: "test", agent: "qwen", task: "Run security tests", dependsOn: "fix" } // 18 chars
]
})
// Mix of short and long tasks
run_agent_sequence({
tasks: [
{ id: "specs", agent: "claude", task: "file:///tmp/detailed-specs-500-lines.txt" }, // Long task
{ id: "implement", agent: "aider", task: "Implement based on: {{specs.output}}", dependsOn: "specs" },
{ id: "review", agent: "gemini", task: "Review the implementation", dependsOn: "implement" } // Short
],
continueOnFailure: true
})
Available agents: qwen, gemini, aider, goose, codex, opencode, claude`,
inputSchema: {
type: 'object',
properties: {
tasks: {
type: 'array',
items: {
type: 'object',
properties: {
id: {
type: 'string',
description: 'Unique identifier for this task'
},
agent: {
type: 'string',
enum: ['qwen', 'gemini', 'aider', 'goose', 'codex', 'opencode', 'claude'],
description: 'Agent to execute this task'
},
task: {
type: 'string',
description: 'Task description (supports {{dependency.output}} templates). KEEP SHORT for direct text (≤200 chars or ≤3 lines). For longer tasks use: file:///absolute/path/to/temp/file.txt. Files must be in /tmp/, /var/tmp/, or TMPDIR and are deleted immediately.'
},
dependsOn: {
type: 'string',
description: 'ID of task this depends on'
},
condition: {
type: 'string',
enum: ['success', 'completed', 'always'],
description: 'When to execute this task'
},
config: {
type: 'object',
description: 'Task configuration (same as run_agent config)'
}
},
required: ['id', 'agent', 'task'],
additionalProperties: false
},
minItems: 1,
description: 'Array of tasks to execute in sequence'
},
continueOnFailure: {
type: 'boolean',
description: 'Continue workflow even if a task fails'
},
templateData: {
type: 'object',
description: 'Additional template variables for task substitution'
}
},
required: ['tasks'],
additionalProperties: false
}
},
{
name: 'run_agents_parallel',
description: `Execute multiple agents simultaneously for independent tasks.
PURPOSE: Run multiple tasks in parallel when they don't depend on each other for maximum efficiency.
CRITICAL INFORMATION FOR AI AGENTS:
- All tasks start simultaneously, not sequentially
- Returns immediately with parallelId for tracking
- Individual task results accessible via their taskIds
- Much faster than run_agent_sequence for independent tasks
- Supports file-based tasks: use file:///tmp/task.txt for long/complex instructions
PARALLEL EXECUTION:
1. All tasks begin execution at the same time
2. No dependency checking or result passing
3. Tasks complete independently of each other
4. Partial success is possible (some tasks succeed, others fail)
5. Optional timeout and fail-fast modes available
RETURN VALUE:
{
success: boolean, // Did any tasks complete successfully?
parallelId: string, // Use this to track the parallel execution
completedTasks: number, // How many tasks completed successfully
totalTasks: number, // Total number of tasks executed
duration?: number, // Total execution time in milliseconds
tasks: { // Status of each named task
[taskName]: {
agent: string,
taskId?: string, // Individual task ID for detailed results
status: 'running' | 'completed' | 'failed' | 'cancelled',
startTime?: number,
endTime?: number,
error?: string
}
},
output: string[] // Combined output from all successful tasks
}
EXAMPLES:
// Run independent checks in parallel
run_agents_parallel({
tasks: {
"lint": { agent: "qwen", task: "Run linting on all files" },
"tests": { agent: "aider", task: "Run unit tests" },
"security": { agent: "claude", task: "Security analysis of the codebase" },
"docs": { agent: "gemini", task: "Review README for clarity" }
}
})
// With timeout and fail-fast
run_agents_parallel({
tasks: {
"build": { agent: "qwen", task: "Build the project" },
"typecheck": { agent: "aider", task: "Run TypeScript type checking" }
},
timeout: 120000, // 2 minutes max
failFast: true // Stop all if any fails
})
// Mix of direct (short) and file-based (long) tasks
run_agents_parallel({
tasks: {
"analysis": { agent: "claude", task: "file:///tmp/security-audit-300-lines.txt" }, // Long task
"simple_check": { agent: "qwen", task: "Check for syntax errors" } // Short task
}
})
Available agents: qwen, gemini, aider, goose, codex, opencode, claude`,
inputSchema: {
type: 'object',
properties: {
tasks: {
type: 'object',
additionalProperties: {
type: 'object',
properties: {
agent: {
type: 'string',
enum: ['qwen', 'gemini', 'aider', 'goose', 'codex', 'opencode', 'claude'],
description: 'Agent to execute this task'
},
task: {
type: 'string',
description: 'Task description. KEEP SHORT for direct text (≤200 chars or ≤3 lines). For longer tasks use: file:///absolute/path/to/temp/file.txt. Files must be in /tmp/, /var/tmp/, or TMPDIR and are deleted immediately.'
},
config: {
type: 'object',
description: 'Task configuration (same as run_agent config)'
}
},
required: ['agent', 'task'],
additionalProperties: false
},
description: 'Named tasks to execute in parallel'
},
timeout: {
type: 'number',
minimum: 1000,
maximum: 600000,
description: 'Timeout in milliseconds for all tasks'
},
failFast: {
type: 'boolean',
description: 'Stop all tasks if any fails'
}
},
required: ['tasks'],
additionalProperties: false
}
}
]
};
});
// Handle tool execution
this.server.setRequestHandler(CallToolRequestSchema, async (request) => {
const { name, arguments: args } = request.params;
switch (name) {
case 'run_agent': {
if (!args || typeof args !== 'object') {
throw new Error('Invalid arguments');
}
// Extract agent, task, and async from args
const { agent, task, async, ...configProps } = args;
if (typeof agent !== 'string' || typeof task !== 'string') {
throw new Error('agent and task are required strings');
}
// Validate agent name
const validatedAgent = AgentName.parse(agent);
// Parse config if provided
const config = Object.keys(configProps).length > 0 ? TaskConfig.parse(configProps) : undefined;
const isAsync = async === true;
if (isAsync) {
// Run asynchronously - return taskId immediately
const result = await this.manager.runAgentAsync(config
? { agent: validatedAgent, task, config }
: { agent: validatedAgent, task });
return {
content: [
{
type: 'text',
text: JSON.stringify(result, null, 2)
}
]
};
}
else {
// Run synchronously - wait for completion
const result = await this.manager.runAgent(config
? { agent: validatedAgent, task, config }
: { agent: validatedAgent, task });
return {
content: [
{
type: 'text',
text: JSON.stringify(result, null, 2)
}
]
};
}
}
case 'list_agents': {
const result = await this.manager.listAgents();
return {
content: [
{
type: 'text',
text: JSON.stringify(result, null, 2)
}
]
};
}
case 'get_task_status': {
if (!args || typeof args !== 'object' || !('taskId' in args)) {
throw new Error('taskId is required');
}
const taskId = args['taskId'];
const outputOptions = args['outputOptions'];
const result = await this.manager.getTaskStatus(taskId, outputOptions);
return {
content: [
{
type: 'text',
text: JSON.stringify(result, null, 2)
}
]
};
}
case 'list_running_tasks': {
const options = args;
const result = await this.manager.listRunningTasks(options);
return {
content: [
{
type: 'text',
text: JSON.stringify(result, null, 2)
}
]
};
}
case 'terminate_task': {
if (!args || typeof args !== 'object' || !('taskId' in args)) {
throw new Error('taskId is required');
}
const taskId = args['taskId'];
const options = {};
if ('graceful' in args && args['graceful'] !== undefined) {
options.graceful = args['graceful'];
}
if ('timeout' in args && args['timeout'] !== undefined) {
options.timeout = args['timeout'];
}
const result = await this.manager.terminateTask(taskId, options);
return {
content: [
{
type: 'text',
text: JSON.stringify(result, null, 2)
}
]
};
}
case 'run_agent_sequence': {
if (!args || typeof args !== 'object' || !('tasks' in args)) {
throw new Error('tasks array is required');
}
const tasks = args['tasks'];
const options = {};
if (args['continueOnFailure'] !== undefined) {
options.continueOnFailure = args['continueOnFailure'];
}
if (args['templateData'] !== undefined) {
options.templateData = args['templateData'];
}
const result = await this.manager.runAgentSequence(tasks, options);
return {
content: [
{
type: 'text',
text: JSON.stringify(result, null, 2)
}
]
};
}
case 'run_agents_parallel': {
if (!args || typeof args !== 'object' || !('tasks' in args)) {
throw new Error('tasks object is required');
}
const tasks = args['tasks'];
const options = {};
if (args['timeout'] !== undefined) {
options.timeout = args['timeout'];
}
if (args['failFast'] !== undefined) {
options.failFast = args['failFast'];
}
const result = await this.manager.runAgentsParallel(tasks, options);
return {
content: [
{
type: 'text',
text: JSON.stringify(result, null, 2)
}
]
};
}
default:
throw new Error(`Unknown tool: ${name}`);
}
});
}
setupPrompts() {
this.server.setRequestHandler(ListPromptsRequestSchema, async () => {
return {
prompts: [
{
name: 'refactor_code',
description: 'Refactor code with AI assistance',
arguments: [
{
name: 'target',
description: 'What to refactor (e.g., "authentication system", "database queries")',
required: true
}
]
},
{
name: 'fix_and_test',
description: 'Fix code issues and add tests',
arguments: [
{
name: 'issue',
description: 'The issue to fix',
required: true
},
{
name: 'files',
description: 'Files involved (comma-separated)',
required: false
}
]
}
]
};
});
this.server.setRequestHandler(GetPromptRequestSchema, async (request) => {
if (request.params.name === 'refactor_code') {
const target = request.params.arguments?.['target'] || 'code';
return {
description: 'Refactor code with AI assistance',
messages: [
{
role: 'user',
content: {
type: 'text',
text: `I'll help you refactor the ${target}. First, let me analyze the current implementation and suggest improvements.
Suggested workflow:
1. await run_agent({ agent: "goose", task: "Analyze the current ${target} implementation" })
2. await run_agent({ agent: "qwen", task: "Refactor the ${target} based on analysis" })`
}
}
]
};
}
if (request.params.name === 'fix_and_test') {
const issue = request.params.arguments?.['issue'] || 'issue';
const files = request.params.arguments?.['files'];
return {
description: 'Fix code issues and add tests',
messages: [
{
role: 'user',
content: {
type: 'text',
text: `I'll fix the issue: "${issue}" and add appropriate tests.
Suggested workflow:
1. await run_agent({ agent: "aider", task: "Fix: ${issue}", files: ${files ? JSON.stringify(files.split(',')) : '[]'} })
2. await run_agent({ agent: "aider", task: "Add tests for the fix" })`
}
}
]
};
}
throw new Error(`Unknown prompt: ${request.params.name}`);
});
}
setupResources() {
this.server.setRequestHandler(ListResourcesRequestSchema, async () => {
return {
resources: [
{
uri: 'help',
name: 'Agent Orchestrator Usage Guide',
description: 'Complete guide on using the Agent Orchestrator',
mimeType: 'text/markdown'
}
]
};
});
this.server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
if (request.params.uri === 'help') {
return {
contents: [
{
uri: 'help',
mimeType: 'text/markdown',
text: this.getHelpContent()
}
]
};
}
throw new Error(`Unknown resource: ${request.params.uri}`);
});
}
getHelpContent() {
return `# 🎭 Agent Orchestrator Usage Guide
## Quick Start
The Agent Orchestrator helps you coordinate multiple CLI agents. Here's how to use it:
### Basic Usage
\`\`\`typescript
// Run a single agent
await run_agent({
agent: "qwen",
task: "Add error handling to the login function"
});
// Check available agents
const agents = await list_agents();
\`\`\`
### Available Agents
- **qwen**: Best for complex coding tasks, refactoring, architecture
- **aider**: Best for file editing, bug fixes, adding features
- **goose**: Best for code analysis, exploration, counting
- **codex**: Best for debugging, system tasks
- **opencode**: General purpose development
- **gemini**: Research and documentation
- **claude**: Best for complex reasoning, architecture decisions
### Common Workflows
#### 1. Refactoring Code
\`\`\`typescript
// Analyze then refactor
await run_agent({ agent: "goose", task: "Analyze authentication system for issues" });
await run_agent({ agent: "qwen", task: "Refactor based on analysis" });
\`\`\`
#### 2. Fixing Bugs
\`\`\`typescript
await run_agent({
agent: "aider",
task: "Fix the null pointer exception in user.py",
files: ["src/user.py"],
model: "gpt-4o"
});
\`\`\`
#### 3. Adding Features
\`\`\`typescript
// Design, implement, test
await run_agent({ agent: "qwen", task: "Design API for new feature" });
await run_agent({ agent: "aider", task: "Implement the API endpoints" });
await run_agent({ agent: "aider", task: "Add tests for new endpoints" });
\`\`\`
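#### 4. Running Independent Tasks in Parallel
When tasks don't depend on each other, \`run_agents_parallel\` runs them concurrently (a sketch; see the tool description for all options):
\`\`\`typescript
await run_agents_parallel({
tasks: {
"lint": { agent: "qwen", task: "Run linting on all files" },
"tests": { agent: "aider", task: "Run unit tests" }
},
failFast: true // stop remaining tasks if any fails
});
\`\`\`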
### Configuration Options
- **model**: Override the default model for an agent
- **files**: Specify files for aider to edit
- **timeout**: Set custom timeout (milliseconds)
- **env**: Custom environment variables for the agent
- **allowFallback**: Enable/disable automatic fallback
- **fallbackAgents**: Specify custom fallback order
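For example (a sketch; option support varies by agent):
\`\`\`typescript
await run_agent({
agent: "qwen",
task: "Refactor the payment module",
fallbackAgents: ["aider", "gemini"], // tried in order if qwen fails
sandbox: true // sandbox mode, where supported
});
\`\`\`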
### Environment Variable Configuration
You can set environment variables in three ways:
#### 1. MCP Server Level (Global)
Set in your MCP configuration - applies to all agents:
\`\`\`json
{
"mcpServers": {
"agent-orchestrator": {
"command": "npx",
"args": ["mcp-agent-orchestrator@latest"],
"env": {
"OPENAI_API_KEY": "sk-your-key",
"AIDER_AUTO_COMMITS": "true",
"DEBUG": "1"
}
}
}
}
\`\`\`
#### 2. Per-Task Level (Override)
Pass custom env vars for specific tasks:
\`\`\`typescript
await run_agent({
agent: "aider",
task: "Fix the authentication bug",
env: {
"AIDER_MODEL": "gpt-4o",
"AIDER_DARK_MODE": "true",
"CUSTOM_API_ENDPOINT": "https://my-api.com"
}
});
\`\`\`
#### 3. Common Environment Variables
- **OPENAI_API_KEY**: Required for qwen, aider
- **ANTHROPIC_API_KEY**: Required for aider (Claude models)
- **AIDER_MODEL**: Override aider's default model
- **AIDER_AUTO_COMMITS**: Enable automatic git commits
- **GOOGLE_API_KEY**: For agents using Google services
- And many more agent-specific vars...
### Checking Task Status
\`\`\`typescript
const result = await run_agent({
agent: "qwen",
task: "Complex refactoring task"
});
// Check progress
const status = await get_task_status({
taskId: result.taskId
});
console.log(\`Status: \${status.status}\`);
console.log(\`Progress: \${status.progress}%\`);
\`\`\`
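For long-running work, pass \`async: true\` to get a taskId back immediately, then poll (sketch):
\`\`\`typescript
const started = await run_agent({
agent: "qwen",
task: "Analyze the whole codebase",
async: true // returns immediately with a taskId
});
let status = await get_task_status({ taskId: started.taskId });
\`\`\`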
### Troubleshooting
1. **Agent not available**: Run \`npx mcp-agent-orchestrator --validate\` to check setup
2. **Authentication errors**: Some agents need setup (codex login, opencode auth login)
3. **Fallback happening**: Check agent status with list_agents()
For more help, visit: https://github.com/username/mcp-agent-orchestrator
`;
}
async startStdio() {
const transport = new StdioServerTransport();
await this.server.connect(transport);
}
}
//# sourceMappingURL=server.js.map
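As an illustrative appendix (not part of the module above), the agent-selection and task-sizing rules spelled out in the tool descriptions can be expressed as small client-side helpers. The function names here are invented for the sketch:

```javascript
// Illustrative client-side helpers (not part of this module) that mirror
// the guidance embedded in the tool descriptions above.

// Pick the ready agent with the most spare capacity, per the
// "COMMON PATTERNS" notes in list_agents.
function pickLeastBusyAgent(agents) {
  const ready = agents.filter((a) => a.available && a.status === 'ready');
  if (ready.length === 0) return undefined;
  return [...ready].sort((a, b) => a.currentTasks - b.currentTasks)[0];
}

// Apply run_agent's rule of thumb: tasks over 200 chars or 3 lines
// should be written to a temp file and passed as file:///...
function needsFileBasedTask(task) {
  return task.length > 200 || task.split('\n').length > 3;
}
```

A caller could feed the result of list_agents into pickLeastBusyAgent before choosing run_agent's agent parameter, and use needsFileBasedTask to decide between direct-text and file-based task submission.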