# Complete Deployment Package
## Package Contents
This comprehensive package provides everything needed to transform the Claude Sub-Agents system into a professional consultancy platform with specialized teams, semantic search, and enterprise-grade deployment capabilities.
### Core Components
```
consultancy-deployment/
├── consultancy-guide.html        # Complete HTML customization guide
├── agent-curation-system.md      # System architecture & implementation
├── web-interface/
│   └── index.html                # Semantic search interface
├── cli/
│   └── agentctl.js               # CLI tool for agent management
├── consultancy-agents/
│   ├── research/                 # 5 research service agents
│   ├── development/              # 5 development service agents
│   ├── strategy/                 # 5 strategic development agents
│   ├── competitive/              # 5 competitive analysis agents
│   └── ai-consulting/            # 5 AI consulting agents
├── agent-index.md                # Complete agent cross-reference
├── deployment-package.md         # This deployment guide
└── integration-scripts/          # Automation scripts
```
## Quick Start Deployment
### Option 1: Full Integration with Existing Repository
```bash
# Step 1: Clone the a-teams repository
git clone https://github.com/your-org/a-teams.git consultancy-platform
cd consultancy-platform
# Step 2: Add consultancy structure
mkdir -p consultancy-agents/{research,development,strategy,competitive,ai-consulting}
mkdir -p web-interface cli integration-scripts
# Step 3: Copy consultancy agents
cp -r /path/to/consultancy-agents/* consultancy-agents/
cp consultancy-guide.html ./
cp web-interface/index.html web-interface/
cp cli/agentctl.js cli/
cp agent-index.md ./
# Step 4: Install dependencies
npm init -y
npm install commander chalk ora inquirer fs-extra yaml node-fetch
# Step 5: Set up semantic search system
# (Follow detailed instructions in agent-curation-system.md)
# Step 6: Launch web interface
cd web-interface
python3 -m http.server 8080
echo "Access at http://localhost:8080"
```
### Option 2: Standalone Consultancy Deployment
```bash
# Step 1: Create new consultancy platform
mkdir consultancy-platform
cd consultancy-platform
# Step 2: Copy complete package
cp -r /path/to/consultancy-deployment/* ./
# Step 3: Initialize system
npm install
chmod +x cli/agentctl.js
# Step 4: Configure for your organization
# Edit consultancy-guide.html with your branding
# Customize agent templates for your methodology
# Configure client templates and standards
# Step 5: Deploy web interface
cd web-interface
python3 -m http.server 8080
# Step 6: Test CLI tool
./cli/agentctl.js search "strategic planning"
```
## System Architecture Implementation
### 1. Database Setup (PostgreSQL + pgvector)
```sql
-- Create database and enable vector extension
CREATE DATABASE agent_curation;
\c agent_curation;
CREATE EXTENSION vector;

-- Create core tables
CREATE TABLE agents (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(255) UNIQUE NOT NULL,
    description TEXT,
    content TEXT NOT NULL,
    service_area VARCHAR(100),
    tools TEXT[],
    capabilities TEXT[],
    quality_score DECIMAL(5,2),
    source_repo VARCHAR(255),
    created_at TIMESTAMP DEFAULT NOW(),
    updated_at TIMESTAMP DEFAULT NOW()
);

CREATE TABLE agent_embeddings (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    agent_id UUID REFERENCES agents(id) ON DELETE CASCADE,
    embedding VECTOR(1536),
    created_at TIMESTAMP DEFAULT NOW()
);

CREATE INDEX idx_agent_embeddings_vector
    ON agent_embeddings USING ivfflat (embedding vector_cosine_ops);

-- Insert consultancy agents
INSERT INTO agents (name, description, service_area, capabilities, quality_score) VALUES
    ('market-researcher', 'Senior market research specialist for comprehensive market analysis', 'research', ARRAY['Market Analysis', 'Consumer Research', 'Competitive Intelligence'], 95),
    ('strategy-consultant', 'Senior strategy consultant specializing in digital transformation', 'strategy', ARRAY['Strategic Planning', 'Digital Transformation', 'Change Management'], 98),
    ('competitive-analyst', 'Strategic competitive intelligence specialist', 'competitive', ARRAY['Competitive Analysis', 'Market Intelligence', 'Strategic Positioning'], 97),
    ('ai-strategist', 'Senior AI strategy consultant for enterprise AI adoption', 'ai-consulting', ARRAY['AI Strategy', 'Technology Roadmaps', 'ML Implementation'], 98);
-- ... (continue for all 25 agents)
```
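The `<=>` operator that the `ivfflat` index accelerates computes cosine distance: `1 − (a·b)/(|a||b|)`, so 0 means the vectors point the same way and 2 means they are opposite. A small JavaScript sketch (a hypothetical helper for illustration, not part of the package) makes the quantity concrete:

```javascript
// Cosine distance as computed by pgvector's <=> operator:
// 1 - (a . b) / (|a| * |b|). Hypothetical reference implementation.
function cosineDistance(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return 1 - dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

This is why the API later reports `1 - distance` as a relevance score: identical directions score 1, orthogonal ones score 0.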
### 2. API Server Setup (Node.js + Express)
```javascript
// server.js
import express from 'express';
import { Pool } from 'pg';
import OpenAI from 'openai';
import cors from 'cors';

const app = express();
const port = 3000;

// Database connection
const pool = new Pool({
  user: 'your_username',
  host: 'localhost',
  database: 'agent_curation',
  password: 'your_password',
  port: 5432,
});

// OpenAI for embeddings
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

app.use(cors());
app.use(express.json());

// Semantic search endpoint
app.post('/api/v1/search', async (req, res) => {
  try {
    const started = Date.now();
    const { query, filters = {}, limit = 10 } = req.body;

    // Generate query embedding
    const embedding = await openai.embeddings.create({
      model: "text-embedding-ada-002",
      input: query,
    });

    // Perform vector similarity search
    const searchQuery = `
      SELECT a.*, (e.embedding <=> $1::vector) AS distance
      FROM agents a
      JOIN agent_embeddings e ON a.id = e.agent_id
      WHERE ($2::text IS NULL OR a.service_area = $2)
        AND ($3::int IS NULL OR a.quality_score >= $3)
      ORDER BY e.embedding <=> $1::vector
      LIMIT $4
    `;

    const result = await pool.query(searchQuery, [
      JSON.stringify(embedding.data[0].embedding),
      filters.service_area || null,
      filters.quality_threshold || null,
      limit
    ]);

    const results = result.rows.map(row => ({
      agent: {
        id: row.id,
        name: row.name,
        description: row.description,
        service_area: row.service_area,
        capabilities: row.capabilities,
        quality_score: row.quality_score
      },
      relevance_score: 1 - row.distance,
      match_reasons: [`Query similarity: ${Math.round((1 - row.distance) * 100)}%`]
    }));

    res.json({
      results,
      total: results.length,
      query_time_ms: Date.now() - started
    });
  } catch (error) {
    console.error('Search error:', error);
    res.status(500).json({ error: 'Search failed' });
  }
});

// Agent details endpoint
app.get('/api/v1/agents/:id', async (req, res) => {
  try {
    const { id } = req.params;
    const result = await pool.query('SELECT * FROM agents WHERE id = $1', [id]);

    if (result.rows.length === 0) {
      return res.status(404).json({ error: 'Agent not found' });
    }

    res.json(result.rows[0]);
  } catch (error) {
    console.error('Agent fetch error:', error);
    res.status(500).json({ error: 'Failed to fetch agent' });
  }
});

app.listen(port, () => {
  console.log(`Agent Curation API running on port ${port}`);
});
```
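Note the `JSON.stringify(...)` passed as the `$1::vector` bind parameter: this works because pgvector's text input format is the same bracketed number list that `JSON.stringify` produces for a float array. A hypothetical formatter that makes the contract explicit:

```javascript
// Format a JS float array as a pgvector text literal, e.g. '[0.1,0.2,0.3]'.
// Hypothetical helper; JSON.stringify on a number array yields the same shape,
// but this version also rejects NaN/Infinity, which pgvector cannot parse
// from JSON-style output.
function toVectorLiteral(values) {
  if (!values.every(Number.isFinite)) {
    throw new Error('vector components must be finite numbers');
  }
  return `[${values.join(',')}]`;
}
```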
### 3. Web Interface Deployment
```bash
# Install web server dependencies
cd web-interface
npm init -y
npm install express serve-static
# Create production server
cat > server.js << 'EOF'
const express = require('express');
const path = require('path');
const app = express();

app.use(express.static('.'));

app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, 'index.html'));
});

const port = process.env.PORT || 8080;
app.listen(port, () => {
  console.log(`Web interface running on port ${port}`);
});
EOF
# Start production server
node server.js
```
## Integration Scripts
### Agent Import Script
```bash
#!/bin/bash
# import-agents.sh

echo "Importing consultancy agents..."

API_URL="http://localhost:3000/api/v1"
AGENTS_DIR="consultancy-agents"

# Function to import a single agent
import_agent() {
    local service_area=$1
    local agent_file=$2
    local agent_name
    agent_name=$(basename "$agent_file" .md)

    echo "Importing $agent_name from $service_area..."

    # Read agent content
    local content
    content=$(cat "$AGENTS_DIR/$service_area/$agent_file")

    # Extract metadata from the front-matter-style fields (first match only)
    local description
    description=$(echo "$content" | grep -m1 "^description:" | cut -d: -f2- | xargs)
    local tools
    tools=$(echo "$content" | grep -m1 "^tools:" | cut -d: -f2- | xargs)

    # Create agent record (tools split on ", " into a proper JSON array)
    curl -X POST "$API_URL/agents" \
        -H "Content-Type: application/json" \
        -d "{
            \"name\": \"$agent_name\",
            \"description\": \"$description\",
            \"content\": $(echo "$content" | jq -Rs .),
            \"service_area\": \"$service_area\",
            \"tools\": $(echo "$tools" | jq -Rc 'split(", ")'),
            \"quality_score\": 95
        }"

    echo "Imported $agent_name"
}

# Import all agents
for service_area in research development strategy competitive ai-consulting; do
    echo "Processing $service_area agents..."
    for agent_file in "$AGENTS_DIR/$service_area"/*.md; do
        if [ -f "$agent_file" ]; then
            import_agent "$service_area" "$(basename "$agent_file")"
        fi
    done
done

echo "Agent import complete!"
```
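The grep/cut pipeline above assumes each agent file carries simple `field: value` metadata lines. The same extraction in JavaScript (a hypothetical equivalent, useful if the import step ever moves into the Node CLI):

```javascript
// Extract a "field: value" metadata line from agent markdown.
// Hypothetical helper; assumes one field per line, first match wins,
// and returns null when the field is absent.
function extractField(content, field) {
  const line = content.split('\n').find(l => l.startsWith(`${field}:`));
  return line ? line.slice(field.length + 1).trim() : null;
}
```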
### Embedding Generation Script
```python
#!/usr/bin/env python3
# generate-embeddings.py

import json
import os
from typing import List

import openai
import psycopg2

# Configuration
openai.api_key = os.getenv('OPENAI_API_KEY')

DB_CONFIG = {
    'host': 'localhost',
    'database': 'agent_curation',
    'user': 'your_username',
    'password': 'your_password'
}

def generate_embedding(text: str) -> List[float]:
    """Generate an embedding for text using OpenAI."""
    response = openai.Embedding.create(
        model="text-embedding-ada-002",
        input=text
    )
    return response['data'][0]['embedding']

def main():
    # Connect to database
    conn = psycopg2.connect(**DB_CONFIG)
    cur = conn.cursor()

    # Get all agents without embeddings
    cur.execute("""
        SELECT a.id, a.name, a.description, a.content
        FROM agents a
        LEFT JOIN agent_embeddings e ON a.id = e.agent_id
        WHERE e.agent_id IS NULL
    """)
    agents = cur.fetchall()

    print(f"Generating embeddings for {len(agents)} agents...")

    for agent_id, name, description, content in agents:
        print(f"Processing {name}...")

        # Combine description and content for embedding
        text_to_embed = f"{description}\n\n{content}"

        try:
            # Generate embedding
            embedding = generate_embedding(text_to_embed)

            # Store embedding (pgvector accepts the bracketed JSON list format)
            cur.execute("""
                INSERT INTO agent_embeddings (agent_id, embedding)
                VALUES (%s, %s)
            """, (agent_id, json.dumps(embedding)))

            print(f"Generated embedding for {name}")
        except Exception as e:
            print(f"Error processing {name}: {e}")
            continue

    conn.commit()
    cur.close()
    conn.close()
    print("Embedding generation complete!")

if __name__ == "__main__":
    main()
```
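The embedding API is rate limited, so a production run of this script usually needs retries with exponential backoff. The delay schedule can be sketched as follows (in JavaScript for consistency with the rest of the package; the retry loop itself is left out, and the base/cap values are hypothetical defaults to tune against the provider's limits):

```javascript
// Capped exponential backoff: baseMs * 2^attempt, never above capMs.
// Hypothetical parameters; attempt is 0-based.
function backoffDelay(attempt, baseMs = 500, capMs = 30000) {
  return Math.min(capMs, baseMs * 2 ** attempt);
}
```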
## Performance Optimization
### 1. Vector Database Optimization
```sql
-- Optimize vector search performance
CREATE INDEX CONCURRENTLY idx_agents_service_area ON agents(service_area);
CREATE INDEX CONCURRENTLY idx_agents_quality_score ON agents(quality_score);

-- Create materialized view for fast searches
CREATE MATERIALIZED VIEW agent_search_view AS
SELECT
    a.id,
    a.name,
    a.description,
    a.service_area,
    a.capabilities,
    a.quality_score,
    e.embedding
FROM agents a
JOIN agent_embeddings e ON a.id = e.agent_id;

CREATE INDEX ON agent_search_view USING ivfflat (embedding vector_cosine_ops);

-- REFRESH ... CONCURRENTLY requires a unique index on the view
CREATE UNIQUE INDEX ON agent_search_view (id);

-- Procedure to refresh the materialized view
CREATE OR REPLACE FUNCTION refresh_agent_search_view()
RETURNS void AS $$
BEGIN
    REFRESH MATERIALIZED VIEW CONCURRENTLY agent_search_view;
END;
$$ LANGUAGE plpgsql;
```
### 2. Caching Strategy
```javascript
// redis-cache.js (node-redis v4 API: explicit connect, promise-based commands)
import { createClient } from 'redis';

const redis = createClient({ url: 'redis://localhost:6379' });
await redis.connect();

export class SearchCache {
  static async get(query, filters) {
    const key = `search:${JSON.stringify({ query, filters })}`;
    const cached = await redis.get(key);
    return cached ? JSON.parse(cached) : null;
  }

  static async set(query, filters, results, ttl = 3600) {
    const key = `search:${JSON.stringify({ query, filters })}`;
    await redis.setEx(key, ttl, JSON.stringify(results));
  }

  static async invalidate(pattern = 'search:*') {
    const keys = await redis.keys(pattern);
    if (keys.length > 0) {
      await redis.del(keys);
    }
  }
}
```
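One subtlety in the cache above: the key embeds `JSON.stringify(filters)`, and property order changes the string, so the same search submitted as `{a, b}` and `{b, a}` produces two different keys and misses the cache. A hypothetical key normalizer that sorts filter properties first:

```javascript
// Build an order-independent cache key by sorting filter properties
// before serialization. Hypothetical helper for illustration.
function cacheKey(query, filters = {}) {
  const sorted = {};
  for (const k of Object.keys(filters).sort()) {
    sorted[k] = filters[k];
  }
  return `search:${JSON.stringify({ query, filters: sorted })}`;
}
```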
## Security Configuration
### 1. API Security
```javascript
// security.js
import helmet from 'helmet';
import rateLimit from 'express-rate-limit';
import jwt from 'jsonwebtoken';

// Security middleware
app.use(helmet());

// Rate limiting
const limiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100,                 // limit each IP to 100 requests per window
  message: 'Too many requests from this IP'
});
app.use('/api/', limiter);

// JWT authentication
const authenticateToken = (req, res, next) => {
  const authHeader = req.headers['authorization'];
  const token = authHeader && authHeader.split(' ')[1];

  if (!token) {
    return res.sendStatus(401);
  }

  jwt.verify(token, process.env.JWT_SECRET, (err, user) => {
    if (err) return res.sendStatus(403);
    req.user = user;
    next();
  });
};

// Protect sensitive endpoints
app.use('/api/v1/admin', authenticateToken);
```
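`authHeader.split(' ')[1]` breaks on extra whitespace and accepts any scheme, not just `Bearer`. A slightly more defensive parser (hypothetical, shown for illustration):

```javascript
// Extract the token from an "Authorization: Bearer <token>" header.
// Returns null for missing headers or non-Bearer schemes; tolerates
// repeated whitespace and a lowercase scheme name.
function extractBearerToken(header) {
  if (typeof header !== 'string') return null;
  const match = header.match(/^Bearer\s+(\S+)$/i);
  return match ? match[1] : null;
}
```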
### 2. Environment Configuration
```bash
# .env
NODE_ENV=production
JWT_SECRET=your-super-secret-jwt-key
OPENAI_API_KEY=your-openai-api-key
DATABASE_URL=postgresql://username:password@localhost:5432/agent_curation
REDIS_URL=redis://localhost:6379
PORT=3000
# Database security
DB_HOST=localhost
DB_NAME=agent_curation
DB_USER=agent_curation_user
DB_PASS=secure-database-password
```
## Monitoring & Analytics
### 1. Usage Analytics
```javascript
// analytics.js
import { Pool } from 'pg';

const analyticsPool = new Pool({ /* analytics DB config */ });

export class Analytics {
  static async trackSearch(query, resultsCount, userId = null) {
    await analyticsPool.query(`
      INSERT INTO search_queries (query, results_count, user_id, timestamp)
      VALUES ($1, $2, $3, NOW())
    `, [query, resultsCount, userId]);
  }

  static async trackAgentView(agentId, userId = null) {
    await analyticsPool.query(`
      INSERT INTO agent_views (agent_id, user_id, timestamp)
      VALUES ($1, $2, NOW())
    `, [agentId, userId]);
  }

  static async getPopularAgents(timeframe = '7 days') {
    // Pass the interval as a bind parameter rather than interpolating
    // it into the SQL text, which would be injectable.
    const result = await analyticsPool.query(`
      SELECT a.name, COUNT(v.agent_id) AS view_count
      FROM agent_views v
      JOIN agents a ON v.agent_id = a.id
      WHERE v.timestamp > NOW() - ($1)::interval
      GROUP BY a.id, a.name
      ORDER BY view_count DESC
      LIMIT 10
    `, [timeframe]);
    return result.rows;
  }
}
```
### 2. Performance Monitoring
```javascript
// monitoring.js
import prometheus from 'prom-client';

const register = new prometheus.Registry();

// Metrics
const httpDuration = new prometheus.Histogram({
  name: 'http_request_duration_seconds',
  help: 'Duration of HTTP requests in seconds',
  labelNames: ['method', 'route', 'status_code'],
  buckets: [0.1, 0.5, 1, 2, 5]
});

const searchLatency = new prometheus.Histogram({
  name: 'search_latency_seconds',
  help: 'Search operation latency',
  buckets: [0.1, 0.5, 1, 2, 5, 10]
});

register.registerMetric(httpDuration);
register.registerMetric(searchLatency);

// Metrics endpoint
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', register.contentType);
  res.end(await register.metrics());
});
```
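Each observation falls under the first declared bucket bound that covers it (Prometheus then exposes the buckets cumulatively, plus an implicit `+Inf` bucket). A hypothetical helper showing how the declared bounds classify a latency sample:

```javascript
// Return the smallest histogram bucket bound that covers a value,
// or Infinity for the implicit +Inf bucket. Defaults mirror the
// httpDuration buckets declared above.
function bucketFor(value, bounds = [0.1, 0.5, 1, 2, 5]) {
  for (const le of bounds) {
    if (value <= le) return le;
  }
  return Infinity;
}
```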
## Production Deployment
### 1. Docker Configuration
```dockerfile
# Dockerfile
FROM node:18-alpine
WORKDIR /app
# Install dependencies
COPY package*.json ./
RUN npm ci --omit=dev
# Copy application
COPY . .
# Create non-root user
RUN addgroup -g 1001 -S nodejs \
 && adduser -S nodeapp -u 1001 -G nodejs
USER nodeapp
EXPOSE 3000
CMD ["node", "server.js"]
```
```yaml
# docker-compose.yml
version: '3.8'

services:
  api:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgresql://user:pass@db:5432/agent_curation
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis

  db:
    image: pgvector/pgvector:pg15
    environment:
      POSTGRES_DB: agent_curation
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./ssl:/etc/ssl
    depends_on:
      - api

volumes:
  postgres_data:
```
### 2. Kubernetes Deployment
```yaml
# k8s-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agent-curation-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: agent-curation-api
  template:
    metadata:
      labels:
        app: agent-curation-api
    spec:
      containers:
        - name: api
          image: your-registry/agent-curation-api:latest
          ports:
            - containerPort: 3000
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: database-url
            - name: OPENAI_API_KEY
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: openai-api-key
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
  name: agent-curation-service
spec:
  selector:
    app: agent-curation-api
  ports:
    - port: 80
      targetPort: 3000
  type: LoadBalancer
```
## Final Checklist
### Pre-Deployment
- [ ] All 25 consultancy agents tested and validated
- [ ] Database schema created and optimized
- [ ] API endpoints functional with proper error handling
- [ ] Web interface responsive and user-friendly
- [ ] CLI tool installed and operational
- [ ] Security measures implemented
- [ ] Environment variables configured
- [ ] SSL certificates installed
- [ ] Monitoring and logging setup
### Go-Live
- [ ] Database seeded with agent data
- [ ] Embeddings generated for all agents
- [ ] Load balancer configured
- [ ] CDN setup for static assets
- [ ] Backup and recovery procedures tested
- [ ] Performance benchmarks established
- [ ] User access and permissions configured
- [ ] Documentation updated and accessible
### Post-Deployment
- [ ] Health checks passing
- [ ] Metrics collection active
- [ ] User training materials prepared
- [ ] Support procedures documented
- [ ] Feedback collection system active
- [ ] Continuous improvement process established
## Success Metrics
### Technical Metrics
- **Search Latency**: < 200ms for semantic searches
- **System Uptime**: 99.9% availability
- **Response Accuracy**: > 95% relevant results
- **Concurrent Users**: Support 100+ simultaneous users
### Business Metrics
- **Agent Utilization**: Track usage patterns across service areas
- **Client Satisfaction**: Measure deliverable quality and client feedback
- **Engagement Growth**: Monitor search volume and agent deployment trends
- **Revenue Impact**: Track consultancy engagement value and efficiency gains
This deployment package provides everything needed to transform the Claude Sub-Agents system into a professional consultancy platform. The combination of specialized agents, semantic search capabilities, and enterprise-grade infrastructure creates a powerful foundation for AI-enhanced consulting services.