# MCP GPT-5 Server Enhanced v2.0
Enhanced MCP (Model Context Protocol) server for accessing GPT-5 with advanced features including caching, reasoning levels, and verbosity control.
## 🚀 New Features in v2.0
- **TypeScript Support**: Full type safety and better IDE integration
- **Reasoning Levels**: Control GPT-5's reasoning effort (low/medium/high)
- **Verbosity Control**: Adjust response detail level (concise/normal/verbose)  
- **Response Caching**: Optional caching to reduce API calls
- **Enhanced Error Handling**: Detailed error types and retry logic
- **Web Search Integration**: Enable web search capabilities
- **Statistics Tracking**: Monitor usage and performance
- **Configuration Flexibility**: Environment variables for all settings
## Installation
### Quick Setup with Claude Code
```bash
claude mcp add -s user sk_gpt5 "npx @ririaru/mcp-gpt5-server"
```
### Manual Installation
```bash
npm install @ririaru/mcp-gpt5-server
```
## Usage
### Basic Query
```javascript
sk_gpt5("Hello, GPT-5!")
```
### Advanced Query with Options
```javascript
sk_gpt5({
  message: "Explain quantum computing",
  reasoning_effort: "high",
  verbosity: "verbose",
  max_tokens: 8000,
  web_search: true
})
```
### Cache Management
```javascript
// Get cache statistics
sk_gpt5_cache({ action: "stats" })
// Clear cache
sk_gpt5_cache({ action: "clear" })
```
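Internally, a response cache of this kind pairs each reply with a timestamp and evicts entries older than `GPT5_CACHE_EXPIRY`. A minimal sketch of that pattern (the `TtlCache` class and its method names are illustrative, not the server's actual internals):

```javascript
// Minimal TTL cache sketch; names are illustrative, not the server's internals.
class TtlCache {
  constructor(expiryMs = 3600000) { // mirrors the GPT5_CACHE_EXPIRY default (1 hour)
    this.expiryMs = expiryMs;
    this.entries = new Map(); // key -> { value, storedAt }
  }

  set(key, value) {
    this.entries.set(key, { value, storedAt: Date.now() });
  }

  get(key) {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() - entry.storedAt > this.expiryMs) {
      this.entries.delete(key); // expired: evict and report a miss
      return undefined;
    }
    return entry.value;
  }

  stats() {
    return { size: this.entries.size };
  }

  clear() {
    this.entries.clear();
  }
}
```

This is why `{ action: "clear" }` is useful after changing defaults: otherwise a stale cached reply generated with the old parameters can be returned until it expires.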
### Usage Statistics
```javascript
sk_gpt5_stats()
```
## Configuration
### Environment Variables
Create a `.env` file based on `.env.example`:
```env
# API Configuration
GPT5_API_URL=https://mcpgpt5.vercel.app/api/messages
GPT5_DEFAULT_MODEL=gpt-5
# Default Parameters
GPT5_DEFAULT_REASONING=medium  # low, medium, high
GPT5_DEFAULT_VERBOSITY=normal  # concise, normal, verbose
GPT5_DEFAULT_MAX_TOKENS=4096
# Cache Configuration
GPT5_ENABLE_CACHE=true
GPT5_CACHE_EXPIRY=3600000  # milliseconds (1 hour)
# Debug Mode
GPT5_DEBUG=false
```
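Each variable falls back to the documented default when unset. A hedged sketch of how that resolution might look in Node (`loadConfig` is an illustrative helper, not the server's exported API):

```javascript
// Sketch of env-var resolution with the documented defaults.
// loadConfig is illustrative; the server's internal name may differ.
function loadConfig(env = process.env) {
  return {
    apiUrl: env.GPT5_API_URL ?? "https://mcpgpt5.vercel.app/api/messages",
    model: env.GPT5_DEFAULT_MODEL ?? "gpt-5",
    reasoning: env.GPT5_DEFAULT_REASONING ?? "medium",
    verbosity: env.GPT5_DEFAULT_VERBOSITY ?? "normal",
    maxTokens: Number(env.GPT5_DEFAULT_MAX_TOKENS ?? 4096),
    enableCache: (env.GPT5_ENABLE_CACHE ?? "true") === "true",
    cacheExpiryMs: Number(env.GPT5_CACHE_EXPIRY ?? 3600000),
    debug: (env.GPT5_DEBUG ?? "true") === "true" && env.GPT5_DEBUG === "true",
  };
}
```

Note that environment variables are always strings, so numeric and boolean settings need explicit conversion.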
## API Reference
### sk_gpt5 Tool
Main tool for querying GPT-5.
**Parameters:**
- `message` (string, required): The message to send
- `model` (string, optional): Model to use (default: gpt-5)
- `reasoning_effort` (enum, optional): low/medium/high (default: medium)
- `verbosity` (enum, optional): concise/normal/verbose (default: normal)
- `max_tokens` (number, optional): Maximum response tokens
- `temperature` (number, optional): Sampling temperature (fixed at 1.0 for GPT-5)
- `web_search` (boolean, optional): Enable web search
- `max_thinking_chars` (number, optional): Limit thinking characters
- `use_cache` (boolean, optional): Use cached response if available
### sk_gpt5_stats Tool
Get client statistics including request count, error rate, and cache usage.
### sk_gpt5_cache Tool
Manage the response cache.
**Parameters:**
- `action` (enum, required): "clear" or "stats"
## Advanced Features
### Reasoning Effort Levels
- **low**: Quick responses with minimal reasoning
- **medium**: Balanced reasoning and response time
- **high**: Deep reasoning for complex queries
### Verbosity Modes
- **concise**: Brief, direct responses
- **normal**: Standard response detail
- **verbose**: Comprehensive responses with examples
### Constraints
- Web search requires at least medium reasoning effort
- Temperature is fixed at 1.0 for GPT-5
- Maximum tokens capped at 10,000
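These constraints can be checked before a request ever leaves the client. A sketch of what such pre-flight validation might look like (`validateOptions` is an illustrative helper, not part of the server's exported API):

```javascript
// Illustrative pre-flight check of the documented constraints.
function validateOptions({ reasoning_effort = "medium", temperature, max_tokens, web_search } = {}) {
  const errors = [];
  if (web_search && reasoning_effort === "low") {
    errors.push("web_search requires at least medium reasoning effort");
  }
  if (temperature !== undefined && temperature !== 1.0) {
    errors.push("temperature is fixed at 1.0 for GPT-5");
  }
  if (max_tokens !== undefined && max_tokens > 10000) {
    errors.push("max_tokens is capped at 10,000");
  }
  return errors; // an empty array means the options are valid
}
```

Violations of these rules surface as `validation_error` or `constraint_error` (see Error Handling below).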
## Development
### Building from Source
```bash
# Install dependencies
npm install
# Build TypeScript
npm run build
# Watch mode for development
npm run dev
```
### Testing Locally
```bash
# Run the MCP server
npm start
# Or directly
node dist/mcp-gpt5-bridge.js
```
## Deployment
### Vercel API Deployment
The `api/messages.js` file is ready for Vercel deployment:
```bash
vercel deploy
```
### NPM Publishing
```bash
npm version patch  # or minor/major
npm publish
```
## Error Handling
The server handles various error types:
- `api_error`: API request failures
- `timeout`: Request timeouts
- `network_error`: Network issues
- `validation_error`: Invalid parameters
- `constraint_error`: Constraint violations
- `cache_error`: Cache-related errors
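Retry logic only makes sense for transient failures, so the error type determines whether a request is worth repeating. A sketch of one plausible mapping (`isRetryable` and the exact split are assumptions, not the server's actual policy):

```javascript
// Illustrative mapping from the documented error types to retry behaviour.
// Which types count as transient is an assumption, not the server's policy.
const RETRYABLE = new Set(["api_error", "timeout", "network_error"]);

function isRetryable(errorType) {
  // validation_error and constraint_error are caller mistakes, and
  // cache_error is a local issue, so repeating the same request won't help.
  return RETRYABLE.has(errorType);
}
```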
## Performance Optimization
- Response caching reduces API calls
- Automatic retry with exponential backoff
- AbortController for timeout management
- Streaming support with fallback to non-streaming
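The retry and timeout points above can be sketched together; the delay schedule, base interval, and helper names here are assumptions rather than the server's exact implementation:

```javascript
// Sketch: exponential backoff schedule plus an AbortController-based timeout.
// backoffDelay and fetchWithRetry are illustrative names; the base delay of
// 500 ms is an assumption, not a documented value.
function backoffDelay(attempt, baseMs = 500) {
  return baseMs * 2 ** attempt; // 500 ms, 1 s, 2 s, ...
}

async function fetchWithRetry(url, { retries = 3, timeoutMs = 30000, ...init } = {}) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), timeoutMs); // abort slow requests
    try {
      return await fetch(url, { ...init, signal: controller.signal });
    } catch (err) {
      if (attempt === retries) throw err; // out of retries: surface the error
      await new Promise((resolve) => setTimeout(resolve, backoffDelay(attempt)));
    } finally {
      clearTimeout(timer);
    }
  }
}
```

Combined with the response cache, this keeps repeated identical queries cheap while transient network failures recover automatically.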
## Requirements
- Node.js 18 or higher
- Claude Code with MCP support
- OpenAI API key (for Vercel deployment)
## License
MIT
## Contributing
Contributions are welcome! Please feel free to submit issues and pull requests.
## Changelog
### v2.0.0
- Complete TypeScript rewrite
- Added reasoning effort levels
- Added verbosity control
- Implemented response caching
- Enhanced error handling
- Added statistics tracking
- Web search integration
- Multiple tool support
### v1.0.5
- Initial release
- Basic GPT-5 proxy functionality