@hivetechs/hive-ai

Real-time streaming AI consensus platform with HTTP+SSE MCP integration for Claude Code, VS Code, Cursor, and Windsurf - powered by OpenRouter's unified API
# 🐝 hive-tools - Multi-Model Consensus Platform

**Why trust one AI when you can trust them all?**

Eliminate AI hallucinations through our revolutionary 4-stage consensus pipeline. hive-tools combines multiple AI models to deliver production-ready, trustworthy responses for mission-critical applications.

## ✨ What You Get

### 🆓 Free Tier (No Credit Card Required)

- **5 daily conversations** - Perfect for trying our consensus technology
- **Single-model AI queries** - Access to OpenAI, Anthropic, Google, and more
- **IDE integration** - Works with VS Code, Cursor, Windsurf
- **Provider management** - Configure and test multiple AI providers

### 🚀 Premium Features (Starting at $5/month)

- **7-day FREE trial** with unlimited conversations
- **4-stage consensus pipeline** - Generator → Refiner → Validator → Curator
- **323+ models** from 55+ providers with intelligent selection
- **Advanced analytics** - Cost tracking, performance monitoring
- **Team collaboration** - Shared usage pools and admin controls

**[🎯 Start Your Free Trial →](https://hivetechs.io/pricing)**

## 🚀 Quick Start

### Installation Options

**Option 1: NPM (Recommended)**

```bash
npm install -g @hivetechs/hive-ai
```

**Option 2: Clone & Build**

```bash
git clone https://github.com/hivetechs-collective/hive.ai.git
cd hive.ai
npm install && npm run build
```

**Option 3: GitHub Release (Manual)**

```bash
# Download the binary for your platform from:
# https://github.com/hivetechs-collective/hive.ai/releases
```

## 🔄 Staying Updated

### Why Keep Updated?

- **Latest AI models** from 55+ providers
- **Security patches** and bug fixes
- **New features** and consensus improvements
- **Performance optimizations**
- **Compatibility** with the latest IDEs

### Check Your Current Version

```bash
# Check your current version
hive --version

# Or get detailed version info
npm list -g @hivetechs/hive-ai
```

### Automatic Update System (NEW!)
hive-tools now includes an intelligent auto-update system:

**Check for Updates:**

```bash
hive update-check    # Check for package updates
hive update-status   # Show detailed update status
```

**Configure Notifications:**

```bash
hive update-configure   # Set update preferences
```

**Features:**

- 🔔 **Smart Notifications** - Get notified when updates are available
- 📅 **Scheduled Checks** - Weekly background checks (configurable)
- 🎯 **Update Types** - Major, minor, and patch update filtering
- 🛡️ **Non-Intrusive** - Silent background operation

### Manual Update Commands

**Update to the Latest Version:**

```bash
npm install -g @hivetechs/hive-ai@latest
```

**Update to the Latest Stable Version:**

```bash
npm update -g @hivetechs/hive-ai
```

**Get Beta Features (Optional):**

```bash
npm install -g @hivetechs/hive-ai@beta
```

### When to Update

**Update immediately when:**

- 🚨 Security updates are released
- 🐛 You encounter known bugs
- 🆕 New AI models are added
- ⚙️ IDE compatibility issues arise

**Check for updates:**

- 📅 **Weekly**: For active development projects
- 📅 **Monthly**: For production environments
- 📅 **As needed**: When new features are announced

### Update Process

1. **Back up your configuration** (optional, but recommended):

   ```bash
   # Your settings are in ~/.hive-ai/ and are preserved during updates
   ls ~/.hive-ai/
   ```

2. **Update the package**:

   ```bash
   npm update -g @hivetechs/hive-ai
   ```

3. **Verify the update**:

   ```bash
   hive --version
   ```

4. **Test basic functionality**:

   ```bash
   hive configure --help
   ```

> 💡 **Pro Tip**: Your license keys, provider configurations, and conversation history are automatically preserved during updates.
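The version check in the update process above can be scripted. The sketch below compares two dotted version strings with `sort -V`; the `ver_lt` helper and the hardcoded versions are illustrative, and the commented-out commands show where the real values would come from:

```shell
#!/bin/sh
# Sketch: decide whether an update is needed by comparing dotted versions.
# In real use, the two versions would come from something like:
#   installed=$(hive --version)                      # may need trimming
#   latest=$(npm view @hivetechs/hive-ai version)
# Hardcoded here so the helper can be demonstrated offline.

# Returns success (0) when $1 is strictly older than $2.
ver_lt() {
  [ "$1" != "$2" ] && [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

installed="1.4.2"
latest="1.5.0"

if ver_lt "$installed" "$latest"; then
  echo "update available: $installed -> $latest"
  # npm update -g @hivetechs/hive-ai
else
  echo "up to date ($installed)"
fi
```

`sort -V` handles multi-digit components correctly (so `1.10.0` sorts after `1.9.0`), which a plain string comparison would get wrong.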
### Troubleshooting Updates

**If an update fails:**

```bash
# Clear the npm cache and retry
npm cache clean --force
npm install -g @hivetechs/hive-ai@latest
```

**If you hit a permissions error on macOS/Linux:**

```bash
# Use sudo (not recommended long-term)
sudo npm update -g @hivetechs/hive-ai

# Better: Fix npm permissions permanently
npm config set prefix ~/.npm-global
export PATH=~/.npm-global/bin:$PATH
```

**Reset to a clean state if needed:**

```bash
# Uninstall and reinstall
npm uninstall -g @hivetechs/hive-ai
npm install -g @hivetechs/hive-ai
```

### Release Notes & What's New

Stay informed about new features and changes:

- 📢 **Release announcements**: [hivetechs.io/changelog](https://hivetechs.io/changelog)
- 📧 **Email updates**: Subscribe at [hivetechs.io/updates](https://hivetechs.io/updates)
- 🐙 **GitHub releases**: [github.com/hivetechs-collective/hive.ai/releases](https://github.com/hivetechs-collective/hive.ai/releases)

### First Steps

**🚨 IMPORTANT: Get Your License First**

Before using hive-tools, you need a license (free, trial, or premium):

1. **Get a FREE License**: Visit [hivetechs.io/pricing](https://hivetechs.io/pricing)
2. **Start a FREE Trial**: 7 days unlimited at [hivetechs.io/pricing](https://hivetechs.io/pricing)
3. **Configure Your License**:

   ```bash
   hive-ai configure --license YOUR_LICENSE_KEY
   ```

**Then Start Using:**

```bash
# Setup wizard
hive-ai setup wizard

# Or configure your own API keys
hive-ai provider configure openai YOUR_OPENAI_KEY
```

**No license = No functionality.** All features require a valid license key from hivetechs.io.

The onboarding assistant will guide you through:

1. License key validation
2. IDE configuration (VS Code, Cursor, or Windsurf)
3. Adding one AI provider for immediate use
4. Testing your first consensus query

## 📋 Subscription Plans & Conversation Limits

| Plan | Daily Limit | Features |
|------|-------------|----------|
| **Free** | 10 conversations | All features included |
| **Basic** | 50 conversations | All features included |
| **Standard** | 100 conversations | All features included |
| **Premium** | 200 conversations | All features included |
| **Team** | Unlimited | All features included |

Visit [hivetechs.io/pricing](https://hivetechs.io/pricing) to upgrade your plan.

After completing the onboarding process, you'll be ready to use hive-tools in your IDE by typing `@hive-tools.` followed by a command.

## 🚀 Transformative Features

### 🧠 4-Stage Consensus Pipeline

hive-tools's core innovation is its unique 4-stage consensus pipeline, which transforms user queries into exceptionally high-quality responses:

1. **Generator Stage** (GPT-3.5-Turbo) - Creates comprehensive initial responses with broad topic coverage
2. **Refiner Stage** (GPT-4-Turbo) - Enhances clarity, corrects inaccuracies, and improves structure
3. **Validator Stage** (GPT-4-Turbo) - Verifies factual accuracy and performs critical reasoning checks
4. **Curator Stage** (GPT-3.5-Turbo) - Delivers polished, well-formatted responses with a consistent tone

### 🔄 Thematic Knowledge Retrieval System

Unlike standard AI assistants, hive-tools features an advanced thematic knowledge retrieval system that:

- Automatically maintains conversation continuity across related topics
- Identifies thematic relationships between seemingly disparate queries
- Builds a comprehensive knowledge graph from user interactions
- Provides context-aware responses without requiring explicit conversation references

### 🧩 Technical Domain Expertise

hive-tools excels at specialized technical domains with a deep understanding of:

- Software Engineering & Programming Languages
- Machine Learning & AI Systems
- Database Technologies & Data Science
- Cloud Infrastructure & DevOps
- Security & Authentication
- Web Development & Modern Frameworks

### 📚 Persistent Contextual Memory

Our SQLite-based persistent storage ensures:

- Long-term memory across sessions
- Automatic context retrieval for related questions
- Progressive knowledge building from user interactions
- Intelligent response adaptation based on conversation history

## 🚀 Getting Started

### 1. Environment Configuration

hive-tools requires OpenAI API access for its multi-model consensus pipeline. Configure your environment by creating a file at `src/env/keys.ts`:

```typescript
export const OPENAI_API_KEY = "your_key_here";
```

> ⚠️ **Security Note**: For production environments, we recommend using environment variables or a secure secrets management solution.

### 2. Installation

Install all dependencies to power the advanced consensus pipeline and thematic knowledge retrieval system:

```bash
npm install
# or
yarn install
```

### 3. Build the Server

Compile the TypeScript implementation of our multi-stage consensus pipeline:

```bash
npm run build
```

### 4. Database Initialization

The SQLite database for persistent conversation memory is automatically initialized on first startup.
The system stores:

- Conversation history with timestamped entries
- Semantic topic embeddings for thematic retrieval
- Conversation metadata for context persistence

### 5. Using the CLI Tool

hive-tools offers a powerful CLI tool for easy interaction:

```bash
# Start the clean interactive CLI (recommended)
npm run cli

# Or use the colorful CLI with syntax highlighting
npm run cli:color

# Run a direct query (quiet mode is the default, showing only the result)
npm run cli -- consensus "What is the capital of France?"

# Enable verbose mode for debugging or detailed output
npm run cli -- --debug consensus "What is the capital of France?"
```

You can also install the CLI globally for easier access:

```bash
# Install the CLI globally
npm link

# Now you can use the 'hive' command directly
hive consensus "What is the capital of France?"

# Or simply type your query directly - any unrecognized input is treated as a consensus query
hive "What is the capital of France?"
```

For detailed CLI documentation, see [CLI.md](./CLI.md).

Once the CLI starts, you'll see the `hive>` prompt. From there, you can run commands directly:

```bash
hive> list_providers
hive> configure_provider OpenAI sk-your-api-key-here
hive> test_providers
hive> configure_pipeline_interactive my_pipeline
```

### 🚀 Shorthand Commands

hive-tools supports intuitive shorthand commands for all tools, making interactions faster and more natural. These shortcuts work both in the CLI and when interacting with AI agents through your IDE.
#### Main Tools

- **Consensus**: `consensus`, `cons`
  - Example: `consensus what is quantum computing?`
- **Capture**: `capture`, `cap`
  - Example: `capture My insights about React | React is a JavaScript library for building user interfaces | code_analysis | react,javascript`

#### Profile Management

- **List Profiles**: `list_profiles`, `profiles`, `lp`
  - Example: `profiles`
- **Get Profile**: `get_profile`, `profile`, `gp`
  - Example: `profile default`
- **Update Profile**: `update_profile`, `up`
  - Example: `update_profile default | {"temperature": 0.7}`

#### Provider Configuration

- **List Providers**: `list_providers`, `providers`, `lprov`
  - Example: `providers`
- **Configure Provider**: `configure_provider`, `config_provider`, `cp`
  - Example: `configure_provider OpenAI sk-abc123 https://api.openai.com`
- **Test Providers**: `test_providers`, `test`, `tp`
  - Example: `test_providers` or `test OpenAI:gpt-4`

#### Pipeline Configuration

- **List Pipeline Profiles**: `list_pipeline_profiles`, `pipelines`, `lpp`
  - Example: `pipelines`
- **Configure Pipeline**: `configure_pipeline`, `config_pipeline`, `cpp`
  - Example: `configure_pipeline default OpenAI:gpt-4:0.7 Anthropic:claude-3-opus:0.5 Gemini:gemini-pro:0.3`
- **Set Default Profile**: `set_default_profile`, `default_profile`, `sdp`
  - Example: `set_default_profile high_quality`

#### Model Registry Management

- **Update Model Registry**: `update_model_registry`, `update_registry`, `umr`
  - Example: `update_model_registry`
- **Add Custom Model**: `add_custom_model`, `add_model`, `acm`
  - Example: `add_custom_model OpenAI gpt-4-turbo-preview GENERAL`
- **List Models**: `list_models`, `models`, `lm`
  - Example: `list_models OpenAI`

This interactive mode makes it much easier to work with hive-tools, as you don't need to prefix commands with `npm run cli --`. We recommend using this interactive mode for all your hive-tools configuration and usage.
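Conceptually, the shorthands above behave like an alias table consulted before dispatch. The sketch below illustrates that idea; the `ALIASES` map contents come from the list above, but `resolveCommand` itself is a hypothetical helper, not hive-tools's actual implementation:

```typescript
// Hypothetical sketch of shorthand-alias resolution. The alias names come
// from the shorthand list above; the resolver itself is illustrative only.
const ALIASES: Record<string, string> = {
  cons: "consensus",
  cap: "capture",
  profiles: "list_profiles",
  lp: "list_profiles",
  gp: "get_profile",
  cp: "configure_provider",
  tp: "test_providers",
  lpp: "list_pipeline_profiles",
  cpp: "configure_pipeline",
  sdp: "set_default_profile",
};

function resolveCommand(input: string): { tool: string; args: string } {
  const [head, ...rest] = input.trim().split(/\s+/);
  // A full command name passes through unchanged; a shorthand maps to it.
  const tool = ALIASES[head] ?? head;
  return { tool, args: rest.join(" ") };
}

console.log(resolveCommand("cons what is quantum computing?"));
// → tool "consensus", args "what is quantum computing?"
```

Because unknown heads pass through unchanged, the same dispatch path handles both `cons …` and `consensus …`, which is why the shortcuts work identically in the CLI and in IDE agents.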
#### CLI Options

- **Standard CLI** (`npm run cli`): Current CLI implementation with enhanced functionality
- **Colorful CLI** (`npm run cli:color`): CLI with syntax highlighting for better readability
- **Legacy CLI** (`npm run cli:old`): Original CLI implementation (for backward compatibility)

> **Note**: The colorful CLI enhances the terminal experience with color-coded commands, parameters, and output messages, making it easier to read and navigate the CLI interface.

### 6. Configure MCP in Your IDE

hive-tools integrates seamlessly with modern IDEs like Windsurf, Cursor, VS Code, and Zed. The easiest way to configure them is with our automatic configuration script:

```bash
npm run configure-ide
```

This script automatically generates all necessary configuration files for supported IDEs:

- **Windsurf**: Configures `~/.codeium/windsurf/mcp_config.json`
- **Cursor**: Configures `~/.cursor/mcp.json`
- **VS Code**: Configures `.vscode/settings.json`
- **Zed**: Configures `.zed/mcp_config.json`

For detailed IDE-specific instructions, see the [IDE Configuration Guide](./ide-config/README.md).

You can also manually configure your IDE using this configuration:

```json
{
  "mcpServers": {
    "hive-tools": {
      "command": "node",
      "args": ["/path/to/your/hive.ai/dist/index.js"]
    }
  }
}
```

> 📘 **Pro Tip**: Replace `/path/to/your/hive.ai/` with the actual path to your hive-tools installation.

After configuration, you'll see the "hive-tools Consensus" tool available in your IDE. This single tool provides access to our entire multi-model pipeline and thematic knowledge system. For detailed integration guides, visit our [documentation portal](https://hive.ai/docs).

## 💪 Available Tools

hive-tools provides a comprehensive suite of 11 tools through the MCP server, organized into five categories.
> **💡 CLI Usage Tip**: While the examples below use the full `hive-tools.command` format for IDE integration, you can also use our interactive CLI by running `npm run cli` once and then typing commands directly at the `hive>` prompt without any prefix.

### 1️⃣ Core Consensus Tools

#### `consensus`

Use the hive-tools multi-model consensus pipeline to generate high-quality responses.

```bash
hive-tools.consensus: What are the tradeoffs between microservices and monoliths?
```

Optional parameters:

- `profile_id`: Specify a pipeline profile (default: the default profile)
- `conversation_id`: Continue a specific conversation thread

#### `capture`

Capture insights, code analyses, and other valuable content into the hive-tools knowledge base.

```bash
hive-tools.capture:
  title: "React Component Best Practices"
  content_type: "best_practice"
  content: "1. Use functional components with hooks instead of class components..."
  tags: ["react", "frontend", "components"]
```

Parameters:

- `title`: Title for the captured content
- `content_type`: Type of content (`"code_analysis"`, `"architecture_insight"`, `"design_pattern"`, `"best_practice"`, `"general"`)
- `content`: The main content to capture
- `tags`: (Optional) Array of tags to categorize the content

### 2️⃣ Provider Configuration Tools

#### `list_providers`

List all configured LLM providers with masked API keys.

```bash
hive-tools.list_providers
```

#### `configure_provider`

Configure an LLM provider with an API key and an optional base URL.

```bash
hive-tools.configure_provider: OpenAI sk-your-api-key-here
```

```bash
hive-tools.configure_provider: OpenAI sk-your-api-key-here https://custom-endpoint.com
```

Format: `PROVIDER_NAME API_KEY [BASE_URL]`

#### `test_providers`

Test configured LLM providers to verify API keys and connectivity.
```bash
hive-tools.test_providers
```

Test a specific provider with an optional model:

```bash
hive-tools.test_providers: OpenAI:gpt-4
```

### 3️⃣ Pipeline Configuration Tools

#### `list_pipeline_profiles`

List all configured pipeline profiles.

```bash
hive-tools.list_pipeline_profiles
```

#### `configure_pipeline`

Configure a pipeline profile with models for each stage. The order of models is extremely important!

```bash
hive-tools.configure_pipeline: profile_name GENERATOR REFINER VALIDATOR [CURATOR]
```

Where:

- `profile_name`: A simple name you choose for this configuration (e.g., "standard", "fast", "premium")
- `GENERATOR`: The first model, which creates the initial content (position 1)
- `REFINER`: The second model, which improves the content (position 2)
- `VALIDATOR`: The third model, which checks facts (position 3)
- `CURATOR`: The fourth model, which formats the final output (position 4, optional)

Each stage uses this format: `PROVIDER_NAME:MODEL_NAME[:TEMPERATURE]`

Example:

```bash
hive-tools.configure_pipeline: standard OpenAI:gpt-3.5-turbo:0.7 OpenAI:gpt-4:0.0 Anthropic:claude-3-haiku:0.0
```

This creates a pipeline named "standard" where:

- OpenAI's GPT-3.5 is the Generator (first position)
- OpenAI's GPT-4 is the Refiner (second position)
- Anthropic's Claude is the Validator (third position)
- Claude is also used as the Curator, since none was specified

#### `configure_pipeline_interactive`

Configure a pipeline profile interactively with guided model selection.

```bash
hive-tools.configure_pipeline_interactive: my_interactive_pipeline
```

This interactive tool:

1. **Lists Available Providers**: Shows all configured providers (Anthropic, Grok, Gemini, etc.)
2. **Displays Available Models**: For each provider, shows available models with descriptions
3. **Guides You Through Each Stage**: Walks you through configuring each pipeline stage:
   - Generator (creates initial content)
   - Refiner (improves the content)
   - Validator (checks for accuracy)
   - Curator (adds final polish)
4. **Sets Default Temperatures**: Suggests appropriate temperature settings for each stage

**Benefits of Interactive Configuration:**

- No need to memorize model names
- Ensures provider compatibility
- Suggests appropriate temperature values
- Reduces configuration errors
- Shows model descriptions to help with selection

**Example Interactive Session:**

```plaintext
Configuring pipeline profile: my_interactive_pipeline

=== Configuring Generator stage ===
Available providers:
1. Anthropic
2. Gemini
3. Grok

Using provider: Anthropic
Available models:
claude-3-haiku-20240307 - Fast and efficient model for routine tasks
claude-3-sonnet-20240229 - Balanced model for most use cases
claude-3-opus-20240229 - Most capable model for complex tasks

Selected model: claude-3-haiku-20240307 (temperature: 0.7)

[Similar process repeats for Refiner, Validator, and Curator stages]

✅ Pipeline profile "my_interactive_pipeline" configured successfully!
```

#### `set_default_profile`

Set a pipeline profile as the default for consensus operations.

```bash
hive-tools.set_default_profile: standard
```

### 4️⃣ Profile Management Tools

#### `list_profiles`

Lists the available provider profiles in the hive-tools system.

```bash
hive-tools.list_profiles
```

#### `get_profile`

Gets the details of a specific provider profile.

```bash
hive-tools.get_profile: default
```

#### `update_profile`

Updates a provider profile configuration.

```bash
hive-tools.update_profile:
  profile_name: "default"
  profile_data: "{ ... profile configuration ... }"
```

## 🔍 Configuration Guide for Beginners

### Setting Up Your AI Pipeline: Step-by-Step

> 💡 **Success Tip**: Think of this as setting up a team of AI assistants, each with a specific job. By the end, you'll have your own customized AI team ready to work together!

Think of the consensus pipeline as an assembly line with four stations, each handled by an AI model you choose. Here's how to set up your own custom pipeline in simple steps:

### Step 1: Start the CLI Tool

Begin by launching the interactive CLI:

```bash
npm run cli
```

Wait for the `hive>` prompt to appear, then proceed with the following steps directly at this prompt.

### Step 2: Configure Your Providers (Required First Step)

**Important**: You must configure your providers with API keys before creating any pipelines. This is a required first step:

```bash
hive> configure_provider OpenAI your-openai-key-here
hive> configure_provider Anthropic your-anthropic-key-here
hive> configure_provider Gemini your-gemini-key-here
hive> configure_provider Grok your-grok-key-here
```

The system automatically detects the appropriate base URL for each provider.

### Step 3: Test Your Connections

Make sure all your providers are working:

```bash
hive> test_providers
```

### Step 4: Create a Pipeline with Interactive Configuration (Recommended)

The recommended way to create a pipeline is with our interactive configuration tool, which guides you through selecting the right providers and models:

```bash
hive> configure_pipeline_interactive my_custom_pipeline
```

This interactive tool will:

1. Show you all the providers you've configured
2. List compatible models for each provider, with descriptions
3. Let you select the best model for each pipeline stage
4. Suggest appropriate temperature settings based on each stage's purpose
5. Let you decide whether to include an optional Curator stage
6. Ask if you want to set this as your default pipeline

### Alternative: Manual Pipeline Configuration

If you prefer, you can also manually configure pipelines for different needs:

```bash
# Premium pipeline with top models
hive> configure_pipeline premium Anthropic:claude-3-opus:0.7 OpenAI:gpt-4:0.3 OpenAI:gpt-4:0.1 Anthropic:claude-3-sonnet:0.5

# Balanced pipeline with mixed providers
hive> configure_pipeline balanced OpenAI:gpt-4:0.7 Anthropic:claude-3-sonnet:0.3 Gemini:gemini-pro:0.1 OpenAI:gpt-3.5-turbo:0.5

# Budget-friendly pipeline
hive> configure_pipeline basic OpenAI:gpt-3.5-turbo:0.7 OpenAI:gpt-3.5-turbo:0.0 OpenAI:gpt-3.5-turbo:0.0
```

Each pipeline has four positions (the last one is optional):

1. **Generator**: Creates the first draft (like a writer)
2. **Refiner**: Improves the draft (like an editor)
3. **Validator**: Checks facts and accuracy (like a fact-checker)
4. **Curator**: Formats the final response (like a publisher)

The numbers after the model names (like 0.7 or 0.0) control creativity - higher means more creative, lower means more consistent.

### Step 5: Choose Your Default Pipeline

Select which pipeline to use when you don't specify one:

```bash
hive> set_default_profile premium
```

### Step 6: Using Your AI Pipeline

Once configured, you can use your pipeline directly from the CLI:

```bash
# Ask a question using your default pipeline
hive> consensus What's the best way to learn JavaScript?

# Use a specific pipeline for this question only
hive> consensus How do quantum computers work? --profile premium

# Save insights to the knowledge base
hive> capture "JavaScript Best Practices" best_practice "Always use const and let instead of var..." javascript,frontend
```

### IDE Integration

In your IDE, you'll use the full tool name format:

```plaintext
hive-tools.consensus: What's the best way to learn JavaScript?

hive-tools.consensus:
  prompt: "How do quantum computers work?"
  profile_id: "premium"
```

### Interactive Pipeline Configuration Experience

The `configure_pipeline_interactive` command provides a truly interactive experience that walks you through each step of creating a pipeline profile. Here's what to expect:

```plaintext
=== Configuring Generator stage ===
Available providers:
1. Gemini
2. OpenAI
3. Anthropic
4. Grok

Select provider number: 3
Selected provider: Anthropic

Available models:
1. claude-3-opus-20240229 - Most powerful Claude model for complex tasks
2. claude-3-sonnet-20240229 - Balanced model for most tasks
3. claude-3-haiku-20240307 - Fastest and most cost-effective Claude model

Select model number: 1
Enter temperature (0.0-1.0) [default: 0.7]: 0.7

Selected model: claude-3-opus-20240229 (temperature: 0.7)
```

This approach ensures you select compatible models and configure your pipeline correctly, without needing to memorize model names or provider compatibility.

### Quick Tips for Beginners

- **Always configure providers first**, before trying to create pipelines
- The **order of models** in a pipeline is critical - position determines function!
- You can mix and match different providers in the same pipeline
- For important questions, use your best models at each stage
- For quick answers, use faster/cheaper models
- When in doubt, use `list_pipeline_profiles` to see your configurations
- After creating a pipeline interactively, you can fine-tune it with the standard `configure_pipeline` command

## 🌡️ Understanding Temperature Settings

Temperature is a crucial parameter that controls the randomness and creativity of AI model outputs. Setting it appropriately can significantly impact the quality and consistency of responses from your consensus pipeline.

### What is Temperature?

Temperature is a hyperparameter that affects how the AI model selects the next token in a sequence.
It's typically set between 0.0 and 1.0, with some models supporting higher values:

- **Low temperature (0.0-0.3)**: More deterministic, focused, and consistent responses
- **Medium temperature (0.4-0.7)**: Balanced creativity and coherence
- **High temperature (0.8-1.0+)**: More random, creative, and diverse responses

### Temperature Effects by Value

| Temperature | Characteristics | Best For | Avoid For |
|-------------|-----------------|----------|-----------|
| **0.0** | Highly deterministic, always selects the most probable token | Factual Q&A, code generation, structured data | Creative writing, brainstorming, casual conversation |
| **0.1-0.3** | Very focused, consistent, and predictable | Technical documentation, definitions, instructions | Open-ended ideation, diverse alternatives |
| **0.4-0.6** | Good balance of coherence and variety | Most general use cases, explanations | Highly formal or highly creative tasks |
| **0.7-0.8** | Creative but still coherent | Content generation, brainstorming | Critical factual responses, code generation |
| **0.9-1.0+** | Highly creative, sometimes unpredictable | Creative writing, idea generation, exploration | Technical accuracy, consistency between runs |

### Recommended Temperature Settings by Pipeline Stage

Each stage of the consensus pipeline benefits from different temperature settings:

1. **Generator Stage**: **0.7-0.8**
   - *Rationale*: A higher temperature encourages broader exploration of ideas and more comprehensive initial responses
   - *Goal*: Generate diverse content that covers multiple aspects of the query
2. **Refiner Stage**: **0.4-0.6**
   - *Rationale*: A moderate temperature balances creativity with structural improvement
   - *Goal*: Enhance the content while maintaining coherence and adding valuable details
3. **Validator Stage**: **0.0-0.3**
   - *Rationale*: A low temperature ensures consistent fact-checking and error detection
   - *Goal*: Critically evaluate content for accuracy with minimal randomness
4. **Curator Stage**: **0.3-0.5**
   - *Rationale*: A moderate-low temperature provides consistent formatting while allowing some flexibility
   - *Goal*: Polish and format content with reliable, predictable results

### Temperature Strategies for Different Use Cases

#### Technical Documentation

- Generator: 0.5-0.6 (balanced initial draft)
- Refiner: 0.3-0.4 (focused improvements)
- Validator: 0.0-0.1 (strict accuracy checking)
- Curator: 0.2-0.3 (consistent formatting)

#### Creative Content

- Generator: 0.8-0.9 (highly creative initial ideas)
- Refiner: 0.6-0.7 (creative but more structured improvements)
- Validator: 0.2-0.3 (fact-checking while preserving style)
- Curator: 0.4-0.5 (stylistic formatting)

#### Balanced General-Purpose

- Generator: 0.7 (moderately creative)
- Refiner: 0.5 (balanced improvements)
- Validator: 0.2 (focused fact-checking)
- Curator: 0.4 (balanced formatting)

### Fine-Tuning Temperature

When setting temperatures in your pipeline configuration, consider these tips:

- **Start with defaults**: Begin with our recommended settings and adjust based on results
- **Iterative refinement**: Test different temperatures and compare outputs
- **Consider query complexity**: More complex queries often benefit from lower temperatures
- **Balance across stages**: If one stage uses a high temperature, consider lower temperatures in the other stages
- **Monitor consistency**: Higher temperatures increase variability between runs

When configuring a pipeline with specific temperatures:

```bash
hive> configure_pipeline custom_pipeline OpenAI:gpt-3.5-turbo:0.7 OpenAI:gpt-4-turbo:0.5 Anthropic:claude-3-haiku:0.2 OpenAI:gpt-3.5-turbo:0.4
```

This creates a pipeline with carefully balanced temperature settings across all four stages.

## 💡 Recommended User Flow

For the best experience with hive-tools, we recommend the following workflow:

1. **Start the CLI**: Run `npm run cli` to launch the interactive CLI
2. **Configure Providers**: Set up your API keys with `configure_provider` (required first step)
3. **Test Connections**: Verify your providers work with `test_providers`
4. **Create a Pipeline Interactively**: Use `configure_pipeline_interactive` to create a pipeline
5. **Set as Default**: Make your new pipeline the default if desired
6. **Use the Pipeline**: Start using the `consensus` command with your configured pipeline

This workflow ensures you have a properly configured system with the right models for each stage of the pipeline.

## 🧰 Using hive-tools Consensus

hive-tools revolutionizes the way you interact with AI assistants. Here's how to leverage its unique capabilities:

### Contextual Conversations

Start asking technical questions and watch as hive-tools maintains context across related topics:

- "What's the difference between REST and GraphQL?"
- Later: "How would I implement authentication in each approach?"

hive-tools automatically connects these related queries without you needing to reference the previous conversation.

### Multi-Stage Processing

Every query passes through our comprehensive 4-stage pipeline:

1. **Generator** creates comprehensive initial responses
2. **Refiner** enhances clarity and structure
3. **Validator** verifies factual accuracy
4. **Curator** delivers polished, well-formatted results

### Technical Domain Expertise

Test hive-tools's deep technical knowledge with complex questions spanning multiple domains:

- Complex software architecture decisions
- Machine learning implementation details
- Database optimization strategies
- System design considerations

> 💡 **Tip**: For optimal results, ask follow-up questions on related topics to leverage hive-tools's thematic knowledge retrieval system.
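The multi-stage processing described above amounts to sequentially chaining stage calls, each consuming the previous stage's output. The sketch below illustrates that shape with stub stage functions; `StageFn`, `runConsensus`, and the stubs are hypothetical placeholders, not hive-tools's actual internals:

```typescript
// Hypothetical sketch of a 4-stage consensus chain. Each stage receives the
// previous stage's output; the stubs stand in for real model calls.
type StageFn = (input: string) => Promise<string>;

// Stage order matters: Generator → Refiner → Validator → Curator.
async function runConsensus(prompt: string, stages: StageFn[]): Promise<string> {
  let text = prompt;
  for (const stage of stages) {
    text = await stage(text); // each stage transforms the previous output
  }
  return text;
}

// Stub stages standing in for real model calls.
const generator: StageFn = async (q) => `draft for: ${q}`;
const refiner: StageFn = async (d) => `refined ${d}`;
const validator: StageFn = async (d) => `validated ${d}`;
const curator: StageFn = async (d) => `polished ${d}`;

runConsensus("What is GraphQL?", [generator, refiner, validator, curator])
  .then((answer) => console.log(answer));
// → "polished validated refined draft for: What is GraphQL?"
```

This sequential shape is why the position of a model in `configure_pipeline` determines its function: the same model string behaves as a Generator in slot one and as a Curator in slot four.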
## 📊 Architecture Overview ```plaintext src/ ├── tools/ │ └── hiveai/ │ ├── consensus.ts # 4-stage consensus pipeline implementation │ ├── provider-config.ts # Provider configuration tools │ ├── pipeline-config.ts # Pipeline profile management │ └── conversation-memory.ts # In-memory conversation tracking ├── storage/ │ ├── database.ts # SQLite database management │ ├── contextManager.ts # Persistent context management │ ├── knowledgeRetrieval.ts # Thematic relationship detection │ ├── topicTagging.ts # Technical domain topic extraction │ ├── userManager.ts # User identification and management │ └── cloudSync.ts # Cloud synchronization for user data ├── cloudflare/ │ ├── worker.js # Cloudflare Workers API implementation │ ├── schema.sql # D1 database schema │ └── wrangler.toml # Cloudflare Workers configuration ├── env/ │ └── keys.ts # Environment configuration └── index.ts # MCP server entry point ``` ## 🔐 User Identification System hive-tools includes a comprehensive user identification system that enables subscription management and usage tracking: ### Features - **Local-First Architecture**: User data is stored locally for privacy and offline resilience - **Multi-Device Support**: Users can register multiple devices under a single account - **Adaptive Verification**: Optimizes cloud API calls based on subscription tier - **Usage Tracking**: Monitors conversation counts with tier-based limits - **Cloud Synchronization**: Syncs usage data across devices via Cloudflare Workers ### Components 1. **Client-Side Implementation** - `userManager.ts`: Handles user creation, device registration, and usage tracking - `cloudSync.ts`: Manages communication with Cloudflare Workers API 2. 
   **Cloudflare Workers Backend**
   - API endpoints for user verification, usage synchronization, and checkout
   - D1 database for storing user data in the cloud
   - Integration with Lemon Squeezy for subscription management

### Deployment

Before deploying the Cloudflare Workers backend, you'll need to set up a Cloudflare account. See our [Cloudflare Setup Guide](./docs/cloudflare-setup.md) for detailed instructions.

Once you have a Cloudflare account, you can deploy using our deployment script:

```bash
# Navigate to the cloudflare directory
cd src/cloudflare

# Run the deployment script
node deploy.js
```

The script will guide you through:

- Logging in to Cloudflare
- Creating a D1 database
- Setting up API keys
- Deploying the worker

For more details on the user identification system, see the [Monetization Strategy](./docs/monetization-strategy.md) document.

## 📊 Consensus Pipeline Analysis

hive-tools includes comprehensive analysis tools to help you understand and optimize the consensus pipeline. These tools analyze the SQLite database to provide insights into model performance, content transformations, and pipeline efficiency.

### Available Reports and Tools

- **[Consensus Pipeline Report](./reports/consensus-pipeline-report.md)** - Detailed statistical analysis of the pipeline
- **[Model Contribution Analysis](./reports/model-contribution-analysis.md)** - How different models contribute at each stage
- **[Interactive Pipeline Visualization](./reports/consensus-pipeline-visualization.html)** - Visual exploration of pipeline data
- **[Visualization Guide](./reports/visualization-guide.md)** - Guide to using the interactive visualization
- **[Use Case Guide](./reports/use-case-guide.md)** - Practical applications and optimization strategies

### Key Insights

Our analysis has revealed several important insights about the consensus pipeline:

1.
   **Pipeline Structure**: The 4-stage pipeline (Generator → Refiner → Validator → Curator) shows a pattern of content expansion followed by refinement.
2. **Model Performance**:
   - Generator: gpt-3.5-turbo is most commonly used
   - Refiner: gpt-4-turbo and grok-3-beta provide significant content expansion
   - Validator: gpt-4-turbo, grok-1, and gemini-2.0-flash make important corrections
   - Curator: gpt-3.5-turbo and claude-3-haiku provide final formatting
3. **Efficient Combinations**:
   - Fastest: gemini-pro → gpt-4 → grok-1 → gemini-pro (5.96s avg)
   - Most common: gpt-3.5-turbo → gpt-4-turbo → gpt-4-turbo → gpt-3.5-turbo (57.00s avg)

For more detailed analysis and optimization recommendations, see the [reports directory](./reports/).

### Generating Your Own Analysis

You can generate updated reports with the latest data using these scripts:

```bash
# Generate comprehensive pipeline report
node consensus-pipeline-report.js

# Analyze model contributions
node model-contribution-analysis.js

# Create interactive visualization
node consensus-pipeline-visualization.js
```

These tools help you understand how the consensus pipeline works and how to optimize it for your specific needs.

## 📬 Contact Us

We'd love to hear from you! Reach out to us with any questions, feedback, or partnership opportunities:

- **General Inquiries**: [hello@hivetechs.io](mailto:hello@hivetechs.io)
- **Technical Support**: [support@hivetechs.io](mailto:support@hivetechs.io)
- **Information Requests**: [info@hivetechs.io](mailto:info@hivetechs.io)
- **Phone**: (813) 400-0871

### Business Address

HiveTechs Collective LLC
7901 4th St N STE 300
St. Petersburg, FL 33702

## 🌐 Learn More

Visit [hivetechs.io](https://hivetechs.io) to learn more about our revolutionary approach to AI consensus and context-aware conversations.
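As an aside on the "Efficient Combinations" figures above (5.96s vs 57.00s averages): those are per-combination means over recorded pipeline runs. The following self-contained TypeScript sketch illustrates that aggregation — the `PipelineRun` record shape here is hypothetical for illustration, not the actual SQLite schema the analysis scripts read:

```typescript
// Hypothetical run records; the real data lives in hive-tools's SQLite database.
interface PipelineRun {
  models: [string, string, string, string]; // generator, refiner, validator, curator
  durationSec: number;                      // total wall-clock time for the run
}

// Average duration per model combination, sorted fastest first.
function rankCombinations(runs: PipelineRun[]): Array<{ combo: string; avgSec: number }> {
  const totals = new Map<string, { sum: number; count: number }>();
  for (const run of runs) {
    const combo = run.models.join(" → ");
    const t = totals.get(combo) ?? { sum: 0, count: 0 };
    t.sum += run.durationSec;
    t.count += 1;
    totals.set(combo, t);
  }
  return [...totals.entries()]
    .map(([combo, t]) => ({ combo, avgSec: t.sum / t.count }))
    .sort((a, b) => a.avgSec - b.avgSec);
}
```

A "fastest" ranking like this only tells you about latency; the reports above pair it with contribution analysis so you can weigh speed against response quality.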
---

## 📜 License & Legal Notice

**⚠️ PROPRIETARY SOFTWARE - COMMERCIAL LICENSING REQUIRED**

hive-tools contains proprietary algorithms, trade secrets, and intellectual property owned exclusively by **HiveTechs Collective LLC**.

### 🚫 Unauthorized Use Strictly Prohibited

Without explicit commercial licensing, the following are **PROHIBITED**:

- Commercial use in any form or capacity
- Reverse engineering or extracting proprietary algorithms
- Creating derivative works or competing products
- Using multi-model consensus methodologies in other products
- Training AI models on proprietary outputs or methodologies

### ✅ Permitted Uses (Non-Commercial Only)

- Personal projects and individual learning
- Academic research with proper attribution
- Educational use by students and researchers
- 30-day commercial evaluation period

### 💼 Commercial Licensing Required

For any commercial use, enterprise deployment, or production environment:

- **Licensing Contact**: licensing@hivetechs.io
- **Pricing Plans**: https://hivetechs.io/pricing
- **Enterprise Sales**: enterprise@hivetechs.io

### ⚖️ Legal Protection

This software is protected by:

- U.S. and international copyright law
- Patent applications and trade secret protections
- Proprietary consensus algorithms and AI methodologies
- Advanced multi-model optimization techniques

### 🔒 Enforcement

Unauthorized commercial use will result in:

- Immediate legal action for damages and injunctive relief
- Recovery of attorney fees and litigation costs
- Potential criminal prosecution under applicable law

**For complete license terms, see the [LICENSE](LICENSE) file.**

### 📞 Contact

- **Legal/Licensing**: legal@hivetechs.io
- **Technical Support**: support@hivetechs.io
- **General Inquiries**: hello@hivetechs.io
- **Website**: https://hivetechs.io

---

**Copyright © 2025 HiveTechs Collective LLC. All rights reserved.**

*hive-tools Professional Software License v2.0 (Enhanced Protection)*