# Revenium Middleware for LiteLLM Node.js
[![npm version](https://badge.fury.io/js/revenium-middleware-litellm-node.svg)](https://badge.fury.io/js/revenium-middleware-litellm-node)
A comprehensive Node.js middleware that automatically tracks LiteLLM Proxy usage and sends metrics both to your LiteLLM server and to Revenium for billing and analytics. It features seamless HTTP interception with support for all LiteLLM providers - no code changes required - and works with both TypeScript and JavaScript projects.
## Features
- ✅ **Seamless HTTP interception** - Automatically tracks all LiteLLM Proxy requests
- ✅ **Multi-provider support** - Works with OpenAI, Anthropic, Google, Azure, Cohere, and more
- ✅ **Chat completions & embeddings** - Full support for both operation types
- ✅ **Streaming support** - Real-time tracking with time-to-first-token metrics
- ✅ **Fire-and-forget tracking** - Will not block application execution with metering updates
- ✅ **Comprehensive analytics** - Track users, customers, and other custom metadata
- ✅ **LiteLLM proxy integration** - Purpose-built for LiteLLM's proxy architecture
## Installation
```bash
npm install revenium-middleware-litellm-node dotenv
npm install --save-dev typescript tsx @types/node # For TypeScript projects
```
## Environment Variables
Set your environment variables:
```bash
export REVENIUM_METERING_API_KEY=hak_your_revenium_api_key
export REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
export LITELLM_PROXY_URL=https://your-litellm-proxy.com
export LITELLM_API_KEY=your_litellm_api_key
export REVENIUM_DEBUG=true # Optional: for debug logging
```
Or create a `.env` file in the project root:
```bash
# .env file
REVENIUM_METERING_API_KEY=hak_your_revenium_api_key
REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
LITELLM_PROXY_URL=https://your-litellm-proxy.com
LITELLM_API_KEY=your_litellm_api_key
REVENIUM_DEBUG=true
```
## LiteLLM Proxy Server Installation
Note that your proxy server will also need the [Revenium Python middleware](https://pypi.org/project/revenium-middleware-litellm) installed in order to pass the metadata received by LiteLLM on to the Revenium API.
## Usage
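With the middleware installed on both sides, client usage is just an import: loading the package before your first request activates the fetch interception, so existing LiteLLM proxy calls are tracked automatically. A minimal sketch, assuming your proxy exposes the standard `/v1/chat/completions` endpoint (the model name is illustrative; see the examples directory for the canonical patterns):
```typescript
// Minimal sketch: importing the middleware patches global fetch, so
// requests to the LiteLLM proxy are tracked with no further changes.
import 'dotenv/config';
import 'revenium-middleware-litellm-node';

const response = await fetch(`${process.env.LITELLM_PROXY_URL}/v1/chat/completions`, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.LITELLM_API_KEY}`,
  },
  body: JSON.stringify({
    model: 'gpt-4o-mini', // any model configured on your proxy
    messages: [{ role: 'user', content: 'Hello!' }],
  }),
});

console.log(await response.json());
```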
## Quick Start Examples
Want to try it immediately? The examples directory contains sample scripts that demonstrate how to integrate Revenium's middleware into your existing code.
### Run the Built-in Examples
```bash
# 1. Install the package and dependencies
npm install revenium-middleware-litellm-node dotenv
npm install --save-dev typescript tsx @types/node
# 2. Set your API keys (see Environment Variables above)
# 3. Run examples
REVENIUM_DEBUG=true npx tsx examples/litellm-basic.ts # Basic LiteLLM proxy usage with metadata
REVENIUM_DEBUG=true npx tsx examples/litellm-streaming.ts # Streaming, embeddings, and advanced features
```
## Advanced Usage
### Adding Custom Metadata
Track users, organizations, agents, API keys, and other custom metadata using the optional metadata fields shown in the examples folder and in the Metadata Headers section below.
## LiteLLM Multi-Provider Features
**Universal LLM Support**: The middleware supports all LiteLLM providers with automatic usage tracking for both chat completions and embeddings.
## Configuration
### Supported Environment Variables
| Variable | Required | Description |
|----------|----------|-------------|
| `REVENIUM_METERING_API_KEY` | Yes | Your Revenium API key (starts with `hak_`) |
| `REVENIUM_METERING_BASE_URL` | No | Revenium API base URL (default: production) |
| `LITELLM_PROXY_URL` | Yes | Your LiteLLM Proxy URL (base URL or full endpoint) |
| `LITELLM_API_KEY` | No | LiteLLM API key (if proxy requires authentication) |
| `REVENIUM_DEBUG` | No | Set to `true` for debug logging |
### Metadata Headers
Metadata headers help provide better analytics and tracking:
```typescript
const headers = {
// Subscriber information (all 4 fields required)
'x-revenium-subscriber-id': 'user-123', // User ID from your system
'x-revenium-subscriber-email': 'user@example.com', // User's email address
'x-revenium-subscriber-credential-name': 'api-key', // Credential name
'x-revenium-subscriber-credential': 'credential-value', // Credential value
// Other optional metadata
'x-revenium-organization-id': 'org-456', // Organization/company ID
'x-revenium-product-id': 'chat-app', // Your product/feature ID
'x-revenium-task-type': 'document_analysis', // Type of AI task
'x-revenium-trace-id': 'trace-789', // Session or conversation ID
'x-revenium-agent': 'document-processor-v2' // AI agent identifier
};
```
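For example, these headers can be attached to an ordinary proxy request; the middleware reads the `x-revenium-*` fields from the request headers. A sketch reusing the `headers` object above (the model name is illustrative):
```typescript
// Sketch: attach the metadata headers to a regular chat completion
// request; the middleware extracts them for analytics.
const res = await fetch(`${process.env.LITELLM_PROXY_URL}/v1/chat/completions`, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.LITELLM_API_KEY}`,
    ...headers, // the x-revenium-* metadata defined above
  },
  body: JSON.stringify({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: 'Summarize this document.' }],
  }),
});
```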
## How It Works
1. **HTTP Interception**: Patches the global `fetch` function to intercept LiteLLM Proxy requests
2. **Request Detection**: Identifies LiteLLM requests by URL pattern matching for both chat and embeddings
3. **Metadata Extraction**: Extracts usage metadata from request headers
4. **Response Processing**: Handles both streaming and non-streaming responses for chat and embeddings
5. **Usage Tracking**: Sends detailed metrics to Revenium API asynchronously
6. **Error Handling**: Implements retry logic and fails silently by default
The middleware never blocks your application - if Revenium tracking fails, your LiteLLM requests continue normally.
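For intuition, here is a conceptual sketch of that interception pattern; this is not the library's source, just the general shape of the technique, with `track` as a hypothetical stand-in for the real metering call:
```typescript
// Conceptual sketch of the fetch-patching pattern described above.
const originalFetch = globalThis.fetch;

globalThis.fetch = async (input: RequestInfo | URL, init?: RequestInit): Promise<Response> => {
  const url =
    typeof input === 'string' ? input : input instanceof URL ? input.href : input.url;
  const response = await originalFetch(input, init);

  // Match LiteLLM-style endpoints for chat completions and embeddings.
  if (url.includes('/chat/completions') || url.includes('/embeddings')) {
    // Fire-and-forget: clone so the caller can still read the body,
    // and swallow tracking errors so the application is never blocked.
    track(response.clone()).catch(() => {});
  }
  return response;
};

// Hypothetical helper: in the real middleware this would extract token
// usage and post it to the Revenium metering API asynchronously.
async function track(res: Response): Promise<void> {
  const body = await res.json();
  console.debug('usage:', body.usage);
}
```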
## Error Handling
Enable debug logging to troubleshoot:
```bash
export REVENIUM_DEBUG=true
```
### LiteLLM Troubleshooting
**Middleware not tracking requests:**
- Ensure middleware is imported before making fetch requests
- Check that environment variables are loaded correctly
- Verify your `REVENIUM_METERING_API_KEY` starts with `hak_`
- Confirm `LITELLM_PROXY_URL` matches your proxy setup
**LiteLLM proxy connection issues:**
- Verify LiteLLM proxy is running and accessible
- Check that `LITELLM_PROXY_URL` includes the correct base URL
- Ensure `LITELLM_API_KEY` is correct if proxy requires authentication
**403 Forbidden errors:**
- Verify your `REVENIUM_METERING_API_KEY` is correct
- Check that `REVENIUM_METERING_BASE_URL` doesn't contain a duplicated path segment (e.g. `/meter/meter`)
- Ensure your API key has the correct permissions
**Streaming not being tracked:**
- Streaming usage is tracked when the stream completes
- Check debug logs for stream completion messages
- Ensure you're consuming the entire stream
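For reference, a sketch of fully consuming a streamed response (assuming a `stream: true` chat completion through the proxy; the model name is illustrative):
```typescript
// Sketch: read the streamed body to the end so the middleware can
// record completion-time metrics once the stream finishes.
const streamRes = await fetch(`${process.env.LITELLM_PROXY_URL}/v1/chat/completions`, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.LITELLM_API_KEY}`,
  },
  body: JSON.stringify({
    model: 'gpt-4o-mini',
    stream: true,
    messages: [{ role: 'user', content: 'Hello!' }],
  }),
});

const reader = streamRes.body!.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) break; // stream fully consumed: usage is tracked at this point
  process.stdout.write(decoder.decode(value, { stream: true }));
}
```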
**Embeddings not being tracked:**
- Verify the endpoint URL includes `/embeddings` or `/v1/embeddings`
- Check that the request body includes the `model` field
- Ensure the response includes usage information
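A sketch of an embeddings request that satisfies those conditions (the model name is illustrative):
```typescript
// Sketch: the URL ends in /v1/embeddings and the body carries `model`,
// so the middleware can identify and track the request.
const emb = await fetch(`${process.env.LITELLM_PROXY_URL}/v1/embeddings`, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.LITELLM_API_KEY}`,
  },
  body: JSON.stringify({
    model: 'text-embedding-3-small', // illustrative embedding model
    input: 'Hello, embeddings!',
  }),
});
console.log((await emb.json()).usage);
```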
Look for log messages like:
- `[Revenium] LiteLLM request intercepted`
- `[Revenium] Usage metadata extracted`
- `[Revenium] Revenium tracking successful`
## Requirements
- Node.js 16+
- LiteLLM Proxy server (local or hosted) with Revenium's [Python middleware](https://pypi.org/project/revenium-middleware-litellm) installed
- TypeScript 5.0+ (for TypeScript projects)