# openai-mcp
A TypeScript library that provides an OpenAI-compatible client for the Model Context Protocol (MCP).
## Installation
```bash
npm install openai-mcp
```
## Features
- OpenAI API compatibility: works as a drop-in replacement for the official OpenAI client
- Connects to local or remote Model Context Protocol servers
- Supports tool use and function calling
- Rate limiting and retry logic built in
- Configurable logging
- TypeScript type definitions included
## Usage
```typescript
import { OpenAI } from 'openai-mcp';

// Create an OpenAI-compatible client connected to an MCP server
const openai = new OpenAI({
  mcp: {
    // MCP server URL(s) to connect to
    serverUrls: ['http://localhost:3000/mcp'],

    // Optional: set log level (debug, info, warn, error)
    logLevel: 'info',

    // Additional configuration options
    // modelName: 'gpt-4',       // Default model to use
    // disconnectAfterUse: true, // Auto-disconnect after use
    // maxToolCalls: 15,         // Max number of tool calls per conversation
    // toolTimeoutSec: 60,       // Timeout for tool calls
  }
});

// Use the client like a standard OpenAI client
const response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello, how are you today?' }
  ]
});

console.log(response.choices[0].message.content);
```
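### Tool Calling with MCP

The `maxToolCalls` and `toolTimeoutSec` options above suggest that tools exposed by the connected MCP server are invoked automatically during a conversation. A minimal sketch under that assumption (the prompt and server URL are illustrative, not part of the documented API):

```typescript
import { OpenAI } from 'openai-mcp';

const openai = new OpenAI({
  mcp: {
    serverUrls: ['http://localhost:3000/mcp'],
    maxToolCalls: 5 // cap the automatic tool-call loop per conversation
  }
});

// Tools advertised by the MCP server are presumably surfaced to the
// model; the client resolves any tool calls before returning the
// final assistant message.
const response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'List the files in the project root.' }]
});

console.log(response.choices[0].message.content);
```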
## Logging Configuration
```typescript
import { setMcpLogLevel } from 'openai-mcp';
// Set log level to one of: 'debug', 'info', 'warn', 'error'
setMcpLogLevel('info');
```
## Environment Variables
The library also supports configuration through environment variables:
```
# MCP server URL(s) - comma-separated for multiple servers
MCP_SERVER_URL=http://localhost:3000/mcp
# API Keys for different model providers
OPENAI_API_KEY=your-openai-api-key
ANTHROPIC_API_KEY=your-anthropic-api-key
GEMINI_API_KEY=your-gemini-api-key
```
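With these variables set, the client can presumably be constructed without explicit options (as the Multi-Model example below does). A minimal sketch, assuming the environment is read at construction time:

```typescript
import { OpenAI } from 'openai-mcp';

// MCP_SERVER_URL and the provider API keys are picked up from the
// environment, so no constructor options are needed here.
const openai = new OpenAI();

const response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Ping' }]
});

console.log(response.choices[0].message.content);
```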
## Multi-Model Support
The library supports routing requests to different model providers based on the model name:
```typescript
import { OpenAI } from 'openai-mcp';

const openai = new OpenAI();

// Uses OpenAI API
const gpt4Response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello GPT-4' }]
});

// Uses Anthropic API
const claudeResponse = await openai.chat.completions.create({
  model: 'claude-3',
  messages: [{ role: 'user', content: 'Hello Claude' }]
});

// Uses Google Gemini API
const geminiResponse = await openai.chat.completions.create({
  model: 'gemini-pro',
  messages: [{ role: 'user', content: 'Hello Gemini' }]
});
```
## Examples
The `examples/` directory contains various usage examples:
- **[Basic Usage](examples/basic-usage.ts)**: Simple chat completion request
- **[Streaming](examples/streaming.ts)**: Stream responses token by token
- **[Multi-Model](examples/multi-model.ts)**: Use OpenAI, Anthropic, and Gemini models
- **[Tools Usage](examples/tools-usage.ts)**: Function/tool calling with MCP
- **[Custom Logging](examples/custom-logging.ts)**: Configure and use the logging system
See the [Examples README](examples/README.md) for more details on running these examples.
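Since the client is a drop-in replacement for the OpenAI client, the streaming example presumably follows the standard OpenAI streaming interface (`stream: true` returning an async iterable of chunks). A minimal sketch under that assumption:

```typescript
import { OpenAI } from 'openai-mcp';

const openai = new OpenAI({
  mcp: { serverUrls: ['http://localhost:3000/mcp'] }
});

// Assuming the client mirrors the official SDK, `stream: true`
// yields chunks that can be consumed with `for await`.
const stream = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Tell me a short story.' }],
  stream: true
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
}
```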
## Development
To build the library:
```bash
npm run build
```
To run tests:
```bash
npm test
```