# ai.libx.js
A unified, stateless API bridge for various AI models including LLMs, image/video generation, TTS, and STT. Edge-compatible and designed for use in serverless environments like Vercel Edge Functions and Cloudflare Workers.
## Features
- **Unified API** - Single interface for multiple AI providers
- **11 Providers** - OpenAI, Anthropic, Google, Groq, Mistral, Cohere, XAI, DeepSeek, AI21, OpenRouter, Cloudflare
- **Streaming Support** - Real-time streaming responses from all compatible providers
- **Plain Text Mode** - Raw text output without JSON wrapping
- **Smart Model Resolution** - Use short aliases like `claude`, `gpt4o`, `gemini` instead of full names
- **Model Normalization** - Intelligent alias resolution (e.g., `gpt-5` → `chatgpt-4o-latest`)
- **Reasoning Model Support** - Automatic detection and parameter adjustment for o1/o3/R1 models
- **Request Logging** - Built-in metrics tracking with detailed statistics
- **Multi-Modal Ready** - Framework support for images (implementation per adapter)
- **Stateless Design** - No state management; pass API keys per request or globally
- **Edge Compatible** - Works with Vercel Edge Functions and Cloudflare Workers
- **Tree-Shakeable** - Import only what you need
- **Type-Safe** - Full TypeScript support with comprehensive types
- **Zero Dependencies** - No external runtime dependencies
## Installation
```bash
npm install ai.libx.js
```
## Usage
### Pattern 1: Generic Client (Runtime Model Selection)
```typescript
import AIClient from 'ai.libx.js';

// Initialize with API keys and enable logging
const ai = new AIClient({
  apiKeys: {
    openai: process.env.OPENAI_API_KEY,
    anthropic: process.env.ANTHROPIC_API_KEY,
    google: process.env.GOOGLE_API_KEY,
  },
  enableLogging: true, // Track metrics
});

// Non-streaming chat (with smart model resolution)
const response = await ai.chat({
  model: 'gpt4o', // Short alias instead of 'openai/gpt-4o'
  messages: [
    { role: 'user', content: 'Hello!' }
  ],
});
console.log(response.content);
console.log(ai.getStats()); // View metrics

// Streaming chat
const stream = await ai.chat({
  model: 'sonnet', // Short alias instead of 'anthropic/claude-3-5-sonnet-latest'
  messages: [
    { role: 'user', content: 'Write a story' }
  ],
  stream: true,
});
for await (const chunk of stream) {
  process.stdout.write(chunk.content);
}

// Plain text mode (raw output)
const plainResponse = await ai.chat({
  model: 'openai/gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
  plain: true, // Returns plain text
});
```
### Pattern 2: Direct Provider Adapter
```typescript
import { OpenAIAdapter, AnthropicAdapter } from 'ai.libx.js/adapters';

// Work directly with a specific provider
const openai = new OpenAIAdapter({
  apiKey: process.env.OPENAI_API_KEY,
});

const response = await openai.chat({
  model: 'gpt-4o', // No vendor prefix needed
  messages: [{ role: 'user', content: 'Hello!' }],
  temperature: 0.7,
});
```
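Adapters take the same options as the generic client, so streaming works the same way. A minimal sketch, assuming the adapter's `chat` honors `stream: true` like `AIClient.chat` does:

```typescript
import { AnthropicAdapter } from 'ai.libx.js/adapters';

const anthropic = new AnthropicAdapter({
  apiKey: process.env.ANTHROPIC_API_KEY,
});

// Stream tokens directly from the provider adapter
const stream = await anthropic.chat({
  model: 'claude-3-5-sonnet-latest', // No vendor prefix needed
  messages: [{ role: 'user', content: 'Write a haiku' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.content);
}
```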
## API Reference
### AIClient
#### Constructor Options
```typescript
const ai = new AIClient({
  apiKeys?: Record<string, string>;    // API keys by provider
  baseUrls?: Record<string, string>;   // Custom base URLs
  cloudflareAccountId?: string;        // For Cloudflare Workers AI
  enableLogging?: boolean;             // Track request metrics (see Request Logging)
});
```
#### chat(options)
```typescript
await ai.chat({
  model: string;                 // Format: "provider/model-name" or a short alias
  messages: Message[];           // Conversation messages
  apiKey?: string;               // Override global API key
  temperature?: number;          // 0-2, default varies by provider
  maxTokens?: number;            // Max tokens to generate
  topP?: number;                 // 0-1, nucleus sampling
  topK?: number;                 // Top-k sampling
  frequencyPenalty?: number;     // -2 to 2
  presencePenalty?: number;      // -2 to 2
  stop?: string | string[];      // Stop sequences
  stream?: boolean;              // Enable streaming
  plain?: boolean;               // Return raw text instead of JSON
  providerOptions?: object;      // Provider-specific options
});
```
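For example, a tightly constrained call combining several of these options (defaults vary by provider, so the values here are illustrative):

```typescript
const summary = await ai.chat({
  model: 'openai/gpt-4o-mini',
  messages: [{ role: 'user', content: 'Summarize the plot of Hamlet.' }],
  temperature: 0,   // As deterministic as the provider allows
  maxTokens: 200,   // Cap the completion length
  stop: ['\n\n'],   // Stop at the first blank line
});
```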
### Supported Providers
| Provider | Prefix | Models |
|----------|--------|---------|
| OpenAI | `openai/` | GPT-4/GPT-5, o1/o3, etc. |
| Anthropic | `anthropic/` | Claude 3/4 series |
| Google | `google/` | Gemini series |
| Groq | `groq/` | LLaMA, Mixtral, Gemma |
| Mistral | `mistral/` | Mistral, Mixtral series |
| Cohere | `cohere/` | Command series |
| XAI | `xai/` | Grok series |
| DeepSeek | `deepseek/` | DeepSeek V3, R1 |
| AI21 | `ai21/` | Jamba, Jurassic |
| OpenRouter | `openrouter/` | Multi-model proxy |
| Cloudflare | `cloudflare/` | Workers AI models |
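Since every provider sits behind the same `chat` call, switching providers only means changing the model string (the model IDs below are illustrative; use `listModels` to see what's registered):

```typescript
const prompt = [{ role: 'user' as const, content: 'Hi!' }];

// Same call shape across providers; only the prefix/model changes
for (const model of [
  'openai/gpt-4o-mini',
  'anthropic/claude-3-5-sonnet-latest',
  'groq/llama-3.1-8b-instant',
]) {
  const res = await ai.chat({ model, messages: prompt });
  console.log(`${model}: ${res.content}`);
}
```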
### Types
```typescript
interface Message {
  role: 'system' | 'user' | 'assistant' | 'tool';
  content: string;
  name?: string;
  tool_call_id?: string;
}

interface ChatResponse {
  content: string;
  finishReason?: string;
  usage?: {
    promptTokens: number;
    completionTokens: number;
    totalTokens: number;
  };
  model: string;
  raw?: any; // Original provider response
}

interface StreamChunk {
  content: string;
  finishReason?: string;
  index?: number;
}
```
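Reading these fields off a non-streaming response:

```typescript
const res = await ai.chat({
  model: 'openai/gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log(res.content);      // Generated text
console.log(res.model);        // Model that served the request
console.log(res.finishReason); // e.g. 'stop' when generation ended naturally
if (res.usage) {
  console.log(`${res.usage.totalTokens} tokens (${res.usage.promptTokens} prompt + ${res.usage.completionTokens} completion)`);
}
```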
## Provider-Specific Notes
### Cloudflare Workers AI
Requires an account ID in addition to the API key:
```typescript
const ai = new AIClient({
  apiKeys: { cloudflare: 'YOUR_API_KEY' },
  cloudflareAccountId: 'YOUR_ACCOUNT_ID',
});
```
### OpenRouter
Supports custom headers:
```typescript
await ai.chat({
  model: 'openrouter/meta-llama/llama-3-70b-instruct',
  messages: [...],
  providerOptions: {
    httpReferer: 'https://yourapp.com',
    xTitle: 'Your App Name',
  },
});
```
## Edge Runtime Compatibility
This library is designed to work in edge environments:
```typescript
// Vercel Edge Function
export const config = { runtime: 'edge' };

export default async function handler(req: Request) {
  const ai = new AIClient({
    apiKeys: { openai: process.env.OPENAI_API_KEY },
  });
  const response = await ai.chat({
    model: 'openai/gpt-4o-mini',
    messages: [{ role: 'user', content: 'Hello!' }],
  });
  return new Response(JSON.stringify(response));
}
```
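The same pattern works in a Cloudflare Worker. A minimal sketch, assuming the key is bound as a Worker secret named `OPENAI_API_KEY`:

```typescript
// Cloudflare Worker (module syntax)
import AIClient from 'ai.libx.js';

export default {
  async fetch(req: Request, env: { OPENAI_API_KEY: string }): Promise<Response> {
    const ai = new AIClient({
      apiKeys: { openai: env.OPENAI_API_KEY },
    });
    const response = await ai.chat({
      model: 'openai/gpt-4o-mini',
      messages: [{ role: 'user', content: 'Hello!' }],
    });
    return new Response(JSON.stringify(response), {
      headers: { 'content-type': 'application/json' },
    });
  },
};
```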
## Error Handling
```typescript
import {
  AILibError,
  AuthenticationError,
  InvalidRequestError,
  RateLimitError,
  ModelNotFoundError,
  ProviderError
} from 'ai.libx.js';

try {
  const response = await ai.chat({...});
} catch (error) {
  if (error instanceof AuthenticationError) {
    console.error('Invalid API key');
  } else if (error instanceof RateLimitError) {
    console.error('Rate limit exceeded');
  } else if (error instanceof ModelNotFoundError) {
    console.error('Model not found');
  }
}
```
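These classes also make targeted recovery straightforward. A sketch of retrying only on rate limits, with an illustrative backoff schedule:

```typescript
import { RateLimitError } from 'ai.libx.js';

async function chatWithRetry(options: Parameters<typeof ai.chat>[0], retries = 3) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await ai.chat(options);
    } catch (error) {
      if (!(error instanceof RateLimitError) || attempt >= retries) throw error;
      // Exponential backoff: 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
    }
  }
}
```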
## Model Utilities
### Model Registry
```typescript
import {
  supportedModels,
  getModelInfo,
  listModels,
  isModelSupported,
  getProviderFromModel
} from 'ai.libx.js';

// Get model info
const info = getModelInfo('openai/gpt-4o');
console.log(info?.displayName); // "GPT-4o"

// List all models for a provider
const openaiModels = listModels('openai');

// Check if model is supported
if (isModelSupported('anthropic/claude-3-5-sonnet-latest')) {
  // ...
}

// Extract provider from model string
const provider = getProviderFromModel('openai/gpt-4o'); // "openai"
```
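These helpers compose naturally, for instance to validate input before calling `chat` (`describeModel` is a hypothetical helper built only from the functions above):

```typescript
function describeModel(model: string): string {
  if (!isModelSupported(model)) {
    return `${model} is not in the registry`;
  }
  const provider = getProviderFromModel(model);
  const info = getModelInfo(model);
  return `${info?.displayName ?? model} (served by ${provider})`;
}

console.log(describeModel('openai/gpt-4o')); // "GPT-4o (served by openai)"
```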
### Model Resolution (Fuzzy Matching)
Use short aliases or partial names instead of full model identifiers:
```typescript
import { resolveModel } from 'ai.libx.js';
// Common aliases
resolveModel('claude'); // â 'anthropic/claude-haiku-4-5'
resolveModel('sonnet'); // â 'anthropic/claude-sonnet-4-5'
resolveModel('opus'); // â 'anthropic/claude-opus-4-1'
resolveModel('gpt4o'); // â 'openai/gpt-4o'
resolveModel('gemini'); // â 'google/models/gemini-2.5-flash'
resolveModel('llama4'); // â 'groq/meta-llama/llama-4-scout-17b-16e-instruct'
resolveModel('deepseek'); // â 'deepseek/deepseek-chat'
resolveModel('grok3'); // â 'xai/grok-3-beta'
// Exact matches pass through unchanged
resolveModel('openai/gpt-4o'); // â 'openai/gpt-4o'
// Non-existent models return unchanged
resolveModel('invalid'); // â 'invalid'
// Use directly in chat
const ai = new AIClient({ apiKeys: {...} });
await ai.chat({
model: 'claude', // Automatically resolved to full model name
messages: [{ role: 'user', content: 'Hello!' }]
});
```
**Features:**
- ✅ Case-insensitive matching
- ✅ Normalized matching (e.g., `gpt4o` → `gpt-4o`)
- ✅ Partial name matching
- ✅ Display name matching
- ✅ Skips disabled models automatically
- ✅ Returns original input if no match found (fail-safe)
### Model Normalization
```typescript
import {
  normalizeModelName,
  isReasoningModel,
  supportsSystemMessages,
  getReasoningModelAdjustments,
  requiresMaxCompletionTokens
} from 'ai.libx.js';

// Resolve model aliases
normalizeModelName('gpt-5');    // → 'chatgpt-4o-latest'
normalizeModelName('claude-4'); // → 'claude-sonnet-4-0'
normalizeModelName('gemini');   // → 'models/gemini-2.0-flash'

// Check for reasoning models
isReasoningModel('openai/o1-preview');          // true
isReasoningModel('deepseek/deepseek-reasoner'); // true

// Check system message support
supportsSystemMessages('openai/o1-preview'); // false (o1 doesn't support them)
supportsSystemMessages('openai/gpt-4o');     // true

// Check if model requires max_completion_tokens
requiresMaxCompletionTokens('openai/gpt-5-nano'); // true (GPT-5 models)
requiresMaxCompletionTokens('openai/o1-preview'); // true (o1/o3 models)
requiresMaxCompletionTokens('openai/gpt-4o');     // false (standard models)

// Get required parameter adjustments
const adjustments = getReasoningModelAdjustments('openai/o3-mini');
// { temperature: 1, topP: 1, useMaxCompletionTokens: true }
```
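`AIClient.chat` applies these adjustments automatically (see Features); the helpers are for when you build provider requests yourself. A sketch, treating the returned adjustments as plain option overrides:

```typescript
const model = 'openai/o3-mini';
const base = { temperature: 0.7, topP: 0.9, maxTokens: 500 };

// Reasoning models require fixed sampling values, so override with the adjustments
const options = isReasoningModel(model)
  ? { ...base, ...getReasoningModelAdjustments(model) }
  : base;
```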
### Request Logging
```typescript
const ai = new AIClient({
  apiKeys: { openai: 'sk-...' },
  enableLogging: true
});

// Make requests...
await ai.chat({ model: 'openai/gpt-4o', messages: [...] });

// Get statistics
const stats = ai.getStats();
console.log(stats);
// {
//   totalRequests: 10,
//   successfulRequests: 9,
//   failedRequests: 1,
//   averageLatency: 1234,
//   totalTokensUsed: 12500,
//   providerBreakdown: {
//     openai: { requests: 6, avgLatency: 1100, tokens: 8000 }
//   }
// }
```
## Examples
See [example.ts](./example.ts) for complete usage examples.
## License
MIT
## Contributing
Contributions are welcome! This library is designed to be extensible: new providers can be added by implementing the `IProviderAdapter` interface, as sketched below.
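As a starting point, a hypothetical adapter skeleton. The exact `IProviderAdapter` signatures aren't documented in this README, so this assumes the interface mirrors the `chat(options)` shape used throughout (the endpoint and payload mapping are placeholders):

```typescript
import type { Message, ChatResponse } from 'ai.libx.js';

// Hypothetical skeleton; check the library's IProviderAdapter definition for the real contract
class MyProviderAdapter {
  constructor(private config: { apiKey: string; baseUrl?: string }) {}

  async chat(options: { model: string; messages: Message[] }): Promise<ChatResponse> {
    const res = await fetch(
      `${this.config.baseUrl ?? 'https://api.myprovider.example'}/v1/chat`,
      {
        method: 'POST',
        headers: {
          authorization: `Bearer ${this.config.apiKey}`,
          'content-type': 'application/json',
        },
        body: JSON.stringify(options),
      }
    );
    const data = await res.json();
    // Map the provider's payload onto the library's ChatResponse shape
    return { content: data.text, model: options.model, raw: data };
  }
}
```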