# ai.libx.js

A unified, stateless API bridge for various AI models including LLMs, image/video generation, TTS, and STT. Edge-compatible and designed for serverless environments such as Vercel Edge Functions and Cloudflare Workers.

## Features

- 🚀 **Unified API** - Single interface for multiple AI providers
- 🔌 **11 Providers** - OpenAI, Anthropic, Google, Groq, Mistral, Cohere, XAI, DeepSeek, AI21, OpenRouter, Cloudflare
- 🌊 **Streaming Support** - Real-time streaming responses from all compatible providers
- 📝 **Plain Text Mode** - Raw text output without JSON wrapping
- 🔍 **Smart Model Resolution** - Use short aliases like `claude`, `gpt4o`, `gemini` instead of full names
- 🤖 **Model Normalization** - Intelligent alias resolution (e.g., `gpt-5` → `chatgpt-4o-latest`)
- 🧠 **Reasoning Model Support** - Automatic detection and parameter adjustment for o1/o3/R1 models
- 📊 **Request Logging** - Built-in metrics tracking with detailed statistics
- 🖼️ **Multi-Modal Ready** - Framework support for images (implementation per adapter)
- 🪶 **Stateless Design** - No state management; pass API keys per request or globally
- 🌐 **Edge Compatible** - Works with Vercel Edge Functions and Cloudflare Workers
- 📦 **Tree-Shakeable** - Import only what you need
- 🔒 **Type-Safe** - Full TypeScript support with comprehensive types
- ⚡ **Zero Dependencies** - No external runtime dependencies

## Installation

```bash
npm install ai.libx.js
```

## Usage

### Pattern 1: Generic Client (Runtime Model Selection)

```typescript
import AIClient from 'ai.libx.js';

// Initialize with API keys and enable logging
const ai = new AIClient({
  apiKeys: {
    openai: process.env.OPENAI_API_KEY,
    anthropic: process.env.ANTHROPIC_API_KEY,
    google: process.env.GOOGLE_API_KEY,
  },
  enableLogging: true, // Track metrics
});

// Non-streaming chat (with smart model resolution)
const response = await ai.chat({
  model: 'gpt4o', // Short alias instead of 'openai/gpt-4o'
  messages: [{ role: 'user', content: 'Hello!' }],
});
console.log(response.content);
console.log(ai.getStats()); // View metrics

// Streaming chat
const stream = await ai.chat({
  model: 'sonnet', // Short alias instead of 'anthropic/claude-3-5-sonnet-latest'
  messages: [{ role: 'user', content: 'Write a story' }],
  stream: true,
});
for await (const chunk of stream) {
  process.stdout.write(chunk.content);
}

// Plain text mode (raw output)
const plainResponse = await ai.chat({
  model: 'openai/gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
  plain: true, // Returns plain text
});
```

### Pattern 2: Direct Provider Adapter

```typescript
import { OpenAIAdapter, AnthropicAdapter } from 'ai.libx.js/adapters';

// Work directly with a specific provider
const openai = new OpenAIAdapter({ apiKey: process.env.OPENAI_API_KEY });

const response = await openai.chat({
  model: 'gpt-4o', // No vendor prefix needed
  messages: [{ role: 'user', content: 'Hello!' }],
  temperature: 0.7,
});
```

## API Reference

### AIClient

#### Constructor Options

```typescript
const ai = new AIClient({
  apiKeys?: Record<string, string>;   // API keys by provider
  baseUrls?: Record<string, string>;  // Custom base URLs
  cloudflareAccountId?: string;       // For Cloudflare Workers AI
  enableLogging?: boolean;            // Track request metrics (see Request Logging)
});
```

#### chat(options)

```typescript
await ai.chat({
  model: string;              // Format: "provider/model-name"
  messages: Message[];        // Conversation messages
  apiKey?: string;            // Override global API key
  temperature?: number;       // 0-2, default varies by provider
  maxTokens?: number;         // Max tokens to generate
  topP?: number;              // 0-1, nucleus sampling
  topK?: number;              // Top-k sampling
  frequencyPenalty?: number;  // -2 to 2
  presencePenalty?: number;   // -2 to 2
  stop?: string | string[];   // Stop sequences
  stream?: boolean;           // Enable streaming
  providerOptions?: object;   // Provider-specific options
});
```

### Supported Providers

| Provider | Prefix | Models |
|----------|--------|--------|
| OpenAI | `openai/` | GPT-4, GPT-3.5, etc. |
| Anthropic | `anthropic/` | Claude 3/4 series |
| Google | `google/` | Gemini 1.0/1.5/2.0 |
| Groq | `groq/` | LLaMA, Mixtral, Gemma |
| Mistral | `mistral/` | Mistral, Mixtral series |
| Cohere | `cohere/` | Command series |
| XAI | `xai/` | Grok series |
| DeepSeek | `deepseek/` | DeepSeek V3, R1 |
| AI21 | `ai21/` | Jamba, Jurassic |
| OpenRouter | `openrouter/` | Multi-model proxy |
| Cloudflare | `cloudflare/` | Workers AI models |

### Types

```typescript
interface Message {
  role: 'system' | 'user' | 'assistant' | 'tool';
  content: string;
  name?: string;
  tool_call_id?: string;
}

interface ChatResponse {
  content: string;
  finishReason?: string;
  usage?: {
    promptTokens: number;
    completionTokens: number;
    totalTokens: number;
  };
  model: string;
  raw?: any; // Original provider response
}

interface StreamChunk {
  content: string;
  finishReason?: string;
  index?: number;
}
```
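Note that `usage` and `finishReason` are optional in `ChatResponse`, so guard for their absence when tracking consumption. A minimal sketch against the types above (the accounting logic is illustrative, not part of the library):

```typescript
import AIClient from 'ai.libx.js';

const ai = new AIClient({ apiKeys: { openai: process.env.OPENAI_API_KEY } });

// Track token consumption across calls; `usage` is optional,
// so fall back to 0 when a provider omits it.
let totalTokens = 0;

const response = await ai.chat({
  model: 'openai/gpt-4o-mini',
  messages: [{ role: 'user', content: 'Summarize this in one line.' }],
});

totalTokens += response.usage?.totalTokens ?? 0;
console.log(`finish: ${response.finishReason ?? 'unknown'}, tokens so far: ${totalTokens}`);
```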
## Provider-Specific Notes

### Cloudflare Workers AI

Requires an account ID:

```typescript
const ai = new AIClient({
  apiKeys: { cloudflare: 'YOUR_API_KEY' },
  cloudflareAccountId: 'YOUR_ACCOUNT_ID',
});
```

### OpenRouter

Supports custom headers:

```typescript
await ai.chat({
  model: 'openrouter/meta-llama/llama-3-70b-instruct',
  messages: [...],
  providerOptions: {
    httpReferer: 'https://yourapp.com',
    xTitle: 'Your App Name',
  },
});
```

## Edge Runtime Compatibility

This library is designed to work in edge environments:

```typescript
// Vercel Edge Function
import AIClient from 'ai.libx.js';

export const config = { runtime: 'edge' };

export default async function handler(req: Request) {
  const ai = new AIClient({
    apiKeys: { openai: process.env.OPENAI_API_KEY },
  });

  const response = await ai.chat({
    model: 'openai/gpt-4o-mini',
    messages: [{ role: 'user', content: 'Hello!' }],
  });

  return new Response(JSON.stringify(response));
}
```
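A Cloudflare Worker looks much the same; here is a minimal sketch (the `OPENAI_API_KEY` binding name is an assumption — use whatever secret binding your Worker defines):

```typescript
// Cloudflare Worker (module syntax)
import AIClient from 'ai.libx.js';

export default {
  async fetch(req: Request, env: { OPENAI_API_KEY: string }): Promise<Response> {
    // Workers expose secrets via `env` bindings rather than process.env
    const ai = new AIClient({
      apiKeys: { openai: env.OPENAI_API_KEY },
    });

    const response = await ai.chat({
      model: 'openai/gpt-4o-mini',
      messages: [{ role: 'user', content: 'Hello!' }],
    });

    return new Response(JSON.stringify(response), {
      headers: { 'content-type': 'application/json' },
    });
  },
};
```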
## Error Handling

```typescript
import {
  AILibError,
  AuthenticationError,
  InvalidRequestError,
  RateLimitError,
  ModelNotFoundError,
  ProviderError,
} from 'ai.libx.js';

try {
  const response = await ai.chat({...});
} catch (error) {
  if (error instanceof AuthenticationError) {
    console.error('Invalid API key');
  } else if (error instanceof RateLimitError) {
    console.error('Rate limit exceeded');
  } else if (error instanceof ModelNotFoundError) {
    console.error('Model not found');
  }
}
```
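Because `RateLimitError` is a distinct class, it is easy to layer a retry policy on top. A minimal sketch with exponential backoff (the retry logic is ours, not part of ai.libx.js):

```typescript
import AIClient, { RateLimitError } from 'ai.libx.js';

// Retry a chat call on rate limits with exponential backoff.
async function chatWithRetry(
  ai: AIClient,
  options: Parameters<AIClient['chat']>[0],
  maxRetries = 3
) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await ai.chat(options);
    } catch (error) {
      // Rethrow anything that isn't a rate limit, or once retries are spent
      if (!(error instanceof RateLimitError) || attempt >= maxRetries) throw error;
      // Wait 1s, 2s, 4s, ... before retrying
      await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
    }
  }
}
```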
## Model Utilities

### Model Registry

```typescript
import {
  supportedModels,
  getModelInfo,
  listModels,
  isModelSupported,
  getProviderFromModel,
} from 'ai.libx.js';

// Get model info
const info = getModelInfo('openai/gpt-4o');
console.log(info?.displayName); // "GPT-4o"

// List all models for a provider
const openaiModels = listModels('openai');

// Check if a model is supported
if (isModelSupported('anthropic/claude-3-5-sonnet-latest')) {
  // ...
}

// Extract the provider from a model string
const provider = getProviderFromModel('openai/gpt-4o'); // "openai"
```

### Model Resolution (Fuzzy Matching)

Use short aliases or partial names instead of full model identifiers:

```typescript
import { resolveModel } from 'ai.libx.js';

// Common aliases
resolveModel('claude');   // → 'anthropic/claude-haiku-4-5'
resolveModel('sonnet');   // → 'anthropic/claude-sonnet-4-5'
resolveModel('opus');     // → 'anthropic/claude-opus-4-1'
resolveModel('gpt4o');    // → 'openai/gpt-4o'
resolveModel('gemini');   // → 'google/models/gemini-2.5-flash'
resolveModel('llama4');   // → 'groq/meta-llama/llama-4-scout-17b-16e-instruct'
resolveModel('deepseek'); // → 'deepseek/deepseek-chat'
resolveModel('grok3');    // → 'xai/grok-3-beta'

// Exact matches pass through unchanged
resolveModel('openai/gpt-4o'); // → 'openai/gpt-4o'

// Non-existent models return unchanged
resolveModel('invalid'); // → 'invalid'

// Use directly in chat
const ai = new AIClient({ apiKeys: {...} });
await ai.chat({
  model: 'claude', // Automatically resolved to the full model name
  messages: [{ role: 'user', content: 'Hello!' }],
});
```

**Features:**

- ✅ Case-insensitive matching
- ✅ Normalized matching (e.g., `gpt4o` → `gpt-4o`)
- ✅ Partial name matching
- ✅ Display name matching
- ✅ Skips disabled models automatically
- ✅ Returns the original input if no match is found (fail-safe)

### Model Normalization

```typescript
import {
  normalizeModelName,
  isReasoningModel,
  supportsSystemMessages,
  getReasoningModelAdjustments,
  requiresMaxCompletionTokens,
} from 'ai.libx.js';

// Resolve model aliases
normalizeModelName('gpt-5');    // → 'chatgpt-4o-latest'
normalizeModelName('claude-4'); // → 'claude-sonnet-4-0'
normalizeModelName('gemini');   // → 'models/gemini-2.0-flash'

// Check for reasoning models
isReasoningModel('openai/o1-preview');          // true
isReasoningModel('deepseek/deepseek-reasoner'); // true

// Check system message support
supportsSystemMessages('openai/o1-preview'); // false (o1 doesn't support system messages)
supportsSystemMessages('openai/gpt-4o');     // true

// Check whether a model requires max_completion_tokens
requiresMaxCompletionTokens('openai/gpt-5-nano'); // true (GPT-5 models)
requiresMaxCompletionTokens('openai/o1-preview'); // true (o1/o3 models)
requiresMaxCompletionTokens('openai/gpt-4o');     // false (standard models)

// Get required parameter adjustments
const adjustments = getReasoningModelAdjustments('openai/o3-mini');
// { temperature: 1, topP: 1, useMaxCompletionTokens: true }
```

### Request Logging

```typescript
const ai = new AIClient({
  apiKeys: { openai: 'sk-...' },
  enableLogging: true,
});

// Make requests...
await ai.chat({ model: 'openai/gpt-4o', messages: [...] });

// Get statistics
const stats = ai.getStats();
console.log(stats);
// {
//   totalRequests: 10,
//   successfulRequests: 9,
//   failedRequests: 1,
//   averageLatency: 1234,
//   totalTokensUsed: 12500,
//   providerBreakdown: {
//     openai: { requests: 6, avgLatency: 1100, tokens: 8000 }
//   }
// }
```

## Examples

See [example.ts](./example.ts) for complete usage examples.

## License

MIT

## Contributing

Contributions are welcome! The library is designed to be extensible: adding a new provider is a matter of implementing the `IProviderAdapter` interface, as sketched below.
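The exact `IProviderAdapter` shape is not reproduced in this README, so the sketch below assumes a minimal contract of a `chat(options)` method returning a `ChatResponse`, matching the adapters shown in Pattern 2 above. Check the library's type definitions for the authoritative interface:

```typescript
import type { ChatResponse, Message } from 'ai.libx.js';

// Hypothetical adapter for a fictional "Acme" provider. The method shape
// (chat(options) → ChatResponse) mirrors the built-in adapters; consult
// the real IProviderAdapter definition before implementing.
class AcmeAdapter {
  constructor(private config: { apiKey: string; baseUrl?: string }) {}

  async chat(options: {
    model: string;
    messages: Message[];
    temperature?: number;
  }): Promise<ChatResponse> {
    const res = await fetch(`${this.config.baseUrl ?? 'https://api.acme.example'}/v1/chat`, {
      method: 'POST',
      headers: {
        'content-type': 'application/json',
        authorization: `Bearer ${this.config.apiKey}`,
      },
      body: JSON.stringify(options),
    });
    if (!res.ok) throw new Error(`Acme request failed: ${res.status}`);

    const data = await res.json();
    // Map the provider payload onto the library's ChatResponse shape
    return {
      content: data.output_text,
      model: options.model,
      finishReason: data.stop_reason,
      raw: data,
    };
  }
}
```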