# MySQL MCP Server - Detailed Documentation
This file contains detailed documentation for all features of the MySQL MCP Server. For quick start and basic information, see [README.md](README.md).
---
## Table of Contents
1. [Dual-Layer Filtering System](#dual-layer-filtering-system) - NEW!
2. [DDL Operations](#ddl-operations)
3. [Data Export Tools](#data-export-tools)
4. [Data Import Tools](#data-import-tools)
5. [Database Backup & Restore](#database-backup--restore)
6. [Data Migration Tools](#data-migration-tools)
7. [Schema Versioning & Migrations](#schema-versioning-and-migrations)
8. [Transaction Management](#transaction-management)
9. [Stored Procedures](#stored-procedures)
10. [Views Management](#views-management)
11. [Triggers Management](#triggers-management)
12. [Functions Management](#functions-management)
13. [Index Management](#index-management)
14. [Constraint Management](#constraint-management)
15. [Table Maintenance](#table-maintenance)
16. [Process & Server Management](#process--server-management)
17. [Performance Monitoring](#performance-monitoring)
18. [Usage Examples](#usage-examples)
19. [Query Logging & Automatic SQL Display](#query-logging--automatic-sql-display)
20. [Security Features](#security-features)
21. [Query Result Caching](#query-result-caching)
22. [Query Optimization Hints](#query-optimization-hints)
23. [Bulk Operations](#bulk-operations)
24. [OpenAI Codex Integration](#openai-codex-integration)
25. [Troubleshooting](#troubleshooting)
26. [License](#license)
27. [Roadmap](#roadmap)
---
## Dual-Layer Filtering System
Control which database operations are available to the AI using a **dual-layer filtering system**:
- **Layer 1 (Permissions)**: Broad operation-level control using legacy categories (required)
- **Layer 2 (Categories)**: Fine-grained tool-level filtering using documentation categories (optional)
**Filtering Logic**: `Tool enabled = (Has Permission) AND (Has Category OR No categories specified)`
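As a sketch of that rule in TypeScript (the names here are illustrative, not the server's actual internals):
```typescript
// Dual-layer decision per tool, assuming each tool declares the
// permission (Layer 1) and category (Layer 2) it belongs to.
interface ToolSpec {
  permission: string; // e.g. "create"
  category: string;   // e.g. "bulk_operations"
}

function isToolEnabled(
  tool: ToolSpec,
  permissions: Set<string>,       // parsed from the 2nd argument
  categories: Set<string> | null, // parsed from the 3rd argument, if any
): boolean {
  const hasPermission = permissions.has(tool.permission);
  // Layer 2 only applies when a category list was actually given.
  const hasCategory = categories === null || categories.has(tool.category);
  return hasPermission && hasCategory;
}
```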
### Why Use Dual-Layer Filtering?
- **Security**: Multiple layers of protection - broad permissions + specific tool access
- **Flexibility**: Simple permission-only mode OR advanced dual-layer mode
- **Backward Compatible**: Existing single-layer configurations continue to work
- **Granular Control**: 10 permissions × 22 categories = precise access control
- **Clear Intent**: Separate "what operations are allowed" from "which specific tools"
### Filtering Modes
| Mode | Configuration | Use Case |
|------|--------------|----------|
| **No Filtering** | No args specified | Development, full trust |
| **Single-Layer** | Permissions only (2nd arg) | Simple, broad control |
| **Dual-Layer** | Permissions + Categories (2nd + 3rd args) | Production, precise control |
### Documentation Categories Reference
```bash
# All 22 available categories (comma-separated):
database_discovery,crud_operations,bulk_operations,custom_queries,
schema_management,utilities,transaction_management,stored_procedures,
views_management,triggers_management,functions_management,index_management,
constraint_management,table_maintenance,server_management,
performance_monitoring,cache_management,query_optimization,
backup_restore,import_export,data_migration,schema_migrations
```
### Configuration Examples
#### Example 1: Single-Layer (Permissions Only) - Backward Compatible
Use only the 2nd argument for broad control:
```json
{
"mcpServers": {
"mysql": {
"command": "node",
"args": [
"/path/to/bin/mcp-mysql.js",
"mysql://user:pass@localhost:3306/db",
"list,read,utility"
]
}
}
}
```
**Result**: All tools within `list`, `read`, and `utility` permissions are enabled.
**Enabled tools**: `list_databases`, `list_tables`, `read_records`, `run_query`, `test_connection`, `export_table_to_csv`, etc.
#### Example 2: Dual-Layer (Permissions + Categories) - Production Read-Only
Use both 2nd argument (permissions) and 3rd argument (categories):
```json
{
"mcpServers": {
"mysql-prod": {
"command": "node",
"args": [
"/path/to/bin/mcp-mysql.js",
"mysql://readonly:pass@prod:3306/app_db",
"list,read,utility",
"database_discovery,performance_monitoring"
]
}
}
}
```
**Layer 1 (Permissions)**: Allows `list`, `read`, `utility` operations
**Layer 2 (Categories)**: Further restricts to `database_discovery` and `performance_monitoring` tools
**Enabled tools**: `list_databases`, `list_tables`, `read_table_schema`, `get_table_relationships`, `get_performance_metrics`, `get_slow_queries`, etc.
**Disabled tools**:
- `read_records` - Has `read` permission but category is `crud_operations` (not allowed)
- `test_connection` - Has `utility` permission but category is `utilities` (not in category list)
- `create_record` - No `create` permission (blocked by Layer 1)
#### Example 3: Development Environment - Single-Layer
Full access using permissions only:
```json
{
"mcpServers": {
"mysql-dev": {
"command": "node",
"args": [
"/path/to/bin/mcp-mysql.js",
"mysql://dev:pass@localhost:3306/dev_db",
"list,read,create,update,delete,ddl,transaction,utility"
]
}
}
}
```
**Result**: All tools within specified permissions are enabled (no category filtering).
#### Example 4: DBA Tasks - Dual-Layer
Schema management and maintenance only:
```json
{
"mcpServers": {
"mysql-dba": {
"command": "node",
"args": [
"/path/to/bin/mcp-mysql.js",
"mysql://dba:pass@server:3306/app_db",
"list,ddl,utility",
"database_discovery,schema_management,table_maintenance,backup_restore,index_management"
]
}
}
}
```
**Enabled**: Schema changes, backups, maintenance - NO data modification.
#### Example 5: Application Backend - Dual-Layer
Data operations without schema changes:
```json
{
"mcpServers": {
"mysql-app": {
"command": "node",
"args": [
"/path/to/bin/mcp-mysql.js",
"mysql://app:pass@localhost:3306/app_db",
"list,read,create,update,delete,transaction,utility",
"crud_operations,bulk_operations,transaction_management,cache_management"
]
}
}
}
```
**Enabled**: Full data CRUD + bulk ops + transactions - NO schema changes (no `ddl` permission).
### Permissions Reference (Layer 1)
| Permission | Operations Allowed | Example Tools |
|------------|-------------------|---------------|
| `list` | List/discover database objects | `list_databases`, `list_tables`, `list_views` |
| `read` | Read data from tables | `read_records`, `run_query` |
| `create` | Insert new records | `create_record`, `bulk_insert` |
| `update` | Update existing records | `update_record`, `bulk_update` |
| `delete` | Delete records | `delete_record`, `bulk_delete` |
| `execute` | Execute custom SQL | `execute_sql`, `run_query` |
| `ddl` | Schema changes | `create_table`, `alter_table`, `drop_table` |
| `utility` | Utility operations | `test_connection`, `analyze_table` |
| `transaction` | Transaction management | `begin_transaction`, `commit_transaction` |
| `procedure` | Stored procedures/functions | `create_stored_procedure`, `execute_function` |
### Categories Reference (Layer 2)
See the full list of 22 documentation categories in the [README.md](README.md#-documentation-categories-recommended).
### How Filtering Works
The system uses both arguments to determine access:
**Argument positions**:
- **2nd argument**: Permissions (Layer 1) - comma-separated legacy categories
- **3rd argument**: Categories (Layer 2, optional) - comma-separated documentation categories
**Decision logic**:
1. If no arguments: All 119 tools enabled
2. If only 2nd argument (permissions): Tools enabled if they match permission
3. If both arguments: Tools enabled if they match BOTH permission AND category
**Example**:
```bash
# Tool: bulk_insert
# Permission required: create
# Category required: bulk_operations
# Single-layer (permissions only)
args: ["mysql://...", "list,create,read"]
Result: ✅ Enabled (has 'create' permission)
# Dual-layer (permissions + categories)
args: ["mysql://...", "list,create,read", "database_discovery,crud_operations"]
Result: ❌ Disabled (has 'create' but category is 'bulk_operations', not in list)
# Dual-layer with correct category
args: ["mysql://...", "list,create,read", "bulk_operations,crud_operations"]
Result: ✅ Enabled (has both 'create' permission AND 'bulk_operations' category)
```
### Troubleshooting Filters
If a tool is not available, check the error message, which tells you which layer blocked it:
**Layer 1 (Permission) error**:
```
Permission denied: This tool requires 'create' permission (Layer 1).
Your current permissions: list,read,utility.
Add 'create' to the permissions argument.
```
**Layer 2 (Category) error**:
```
Permission denied: This tool requires 'bulk_operations' category (Layer 2).
Your current categories: database_discovery,crud_operations.
Add 'bulk_operations' to the categories argument.
```
---
## DDL Operations
DDL (Data Definition Language) operations allow the AI to create, modify, and drop tables.
### ⚠️ Enable DDL with Caution
DDL operations are **disabled by default** for safety. Add `ddl` to permissions to enable:
```json
{
"args": [
"mysql://user:pass@localhost:3306/db",
"list,read,create,update,delete,ddl,utility"
]
}
```
### DDL Tool Examples
#### Create Table
**User prompt:** *"Create a users table with id, username, email, and created_at"*
**AI will execute:**
```json
{
"tool": "create_table",
"arguments": {
"table_name": "users",
"columns": [
{"name": "id", "type": "INT", "primary_key": true, "auto_increment": true},
{"name": "username", "type": "VARCHAR(255)", "nullable": false},
{"name": "email", "type": "VARCHAR(255)", "nullable": false},
{"name": "created_at", "type": "DATETIME", "default": "CURRENT_TIMESTAMP"}
]
}
}
```
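For orientation, that call corresponds to a standard `CREATE TABLE` statement along these lines (a sketch, not the server's verbatim output):
```typescript
// Roughly the SQL generated for the create_table call above (sketch).
const createUsersSql = `
  CREATE TABLE users (
    id INT AUTO_INCREMENT PRIMARY KEY,
    username VARCHAR(255) NOT NULL,
    email VARCHAR(255) NOT NULL,
    created_at DATETIME DEFAULT CURRENT_TIMESTAMP
  )
`;
```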
#### Alter Table
**User prompt:** *"Add a phone column to the users table"*
**AI will execute:**
```json
{
"tool": "alter_table",
"arguments": {
"table_name": "users",
"operations": [
{
"type": "add_column",
"column_name": "phone",
"column_type": "VARCHAR(20)",
"nullable": true
}
]
}
}
```
#### Drop Table
**User prompt:** *"Drop the temp_data table"*
**AI will execute:**
```json
{
"tool": "drop_table",
"arguments": {
"table_name": "temp_data",
"if_exists": true
}
}
```
### DDL Safety Guidelines
1. ✅ **Enable only in development** - Keep DDL disabled for production
2. ✅ **Backup before major changes** - DDL operations are usually irreversible
3. ✅ **Test in dev first** - Try schema changes in a development environment
4. ✅ **Use proper MySQL user permissions** - Grant only necessary privileges
---
## Data Export Tools
The MySQL MCP Server provides powerful data export capabilities, allowing AI agents to export database content in CSV format for analysis, reporting, and data sharing.
### Data Export Tools Overview
- **`export_table_to_csv`** - Export all or filtered data from a table to CSV format
- **`export_query_to_csv`** - Export the results of a custom SELECT query to CSV format
Both tools support:
- Filtering data with conditions
- Pagination for large datasets
- Sorting results
- Optional column headers
- Proper CSV escaping for special characters (sketched below)
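The escaping follows the usual CSV rules: a field is quoted when it contains a comma, quote, or newline, and embedded quotes are doubled. A minimal TypeScript sketch of the idea:
```typescript
// Minimal RFC 4180-style field escaping, as a sketch of what
// "proper CSV escaping" means here.
function escapeCsvField(value: string): string {
  if (/[",\n\r]/.test(value)) {
    return `"${value.replace(/"/g, '""')}"`;
  }
  return value;
}

// escapeCsvField('plain')    -> plain
// escapeCsvField('a,b')      -> "a,b"
// escapeCsvField('say "hi"') -> "say ""hi"""
```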
### Data Export Tool Examples
#### Export Table to CSV
**User prompt:** *"Export the first 100 users ordered by registration date to CSV"*
**AI will execute:**
```json
{
"tool": "export_table_to_csv",
"arguments": {
"table_name": "users",
"sorting": {
"field": "registration_date",
"direction": "desc"
},
"pagination": {
"page": 1,
"limit": 100
},
"include_headers": true
}
}
```
#### Export Filtered Data to CSV
**User prompt:** *"Export all users from the marketing department to CSV"*
**AI will execute:**
```json
{
"tool": "export_table_to_csv",
"arguments": {
"table_name": "users",
"filters": [
{
"field": "department",
"operator": "eq",
"value": "marketing"
}
],
"include_headers": true
}
}
```
#### Export Query Results to CSV
**User prompt:** *"Export a report of total sales by product category to CSV"*
**AI will execute:**
```json
{
"tool": "export_query_to_csv",
"arguments": {
"query": "SELECT category, SUM(sales_amount) as total_sales FROM sales GROUP BY category ORDER BY total_sales DESC",
"include_headers": true
}
}
```
### Data Export Best Practices
1. ✅ **Use filtering** - Export only the data you need to reduce file size
2. ✅ **Implement pagination** - For large datasets, use pagination to avoid memory issues
3. ✅ **Include headers** - Make CSV files more understandable with column headers
4. ✅ **Test with small datasets first** - Verify the export format before processing large amounts of data
5. ✅ **Use proper permissions** - Data export tools require the `utility` permission
### Common Data Export Patterns
**Pattern 1: Simple Table Export**
```json
{
"tool": "export_table_to_csv",
"arguments": {
"table_name": "products",
"include_headers": true
}
}
```
**Pattern 2: Filtered and Sorted Export**
```json
{
"tool": "export_table_to_csv",
"arguments": {
"table_name": "orders",
"filters": [
{
"field": "order_date",
"operator": "gte",
"value": "2023-01-01"
}
],
"sorting": {
"field": "order_date",
"direction": "desc"
},
"include_headers": true
}
}
```
**Pattern 3: Complex Query Export**
```json
{
"tool": "export_query_to_csv",
"arguments": {
"query": "SELECT u.name, u.email, COUNT(o.id) as order_count FROM users u LEFT JOIN orders o ON u.id = o.user_id GROUP BY u.id HAVING order_count > 5",
"include_headers": true
}
}
```
---
## Data Import Tools
The MySQL MCP Server provides tools to import data from various formats into your database tables.
### Data Import Tools Overview
| Tool | Description | Permission |
|------|-------------|------------|
| `import_from_csv` | Import data from CSV string | `create` |
| `import_from_json` | Import data from JSON array | `create` |
### Import from CSV
Import data from a CSV string into a table with optional column mapping and error handling.
```json
{
"tool": "import_from_csv",
"arguments": {
"table_name": "users",
"csv_data": "name,email,age\nJohn,john@example.com,30\nJane,jane@example.com,25",
"has_headers": true,
"skip_errors": false,
"batch_size": 100
}
}
```
**With Column Mapping:**
```json
{
"tool": "import_from_csv",
"arguments": {
"table_name": "users",
"csv_data": "full_name,mail\nJohn Doe,john@example.com",
"has_headers": true,
"column_mapping": {
"full_name": "name",
"mail": "email"
}
}
}
```
### Import from JSON
Import data from a JSON array string into a table.
```json
{
"tool": "import_from_json",
"arguments": {
"table_name": "products",
"json_data": "[{\"name\":\"Widget\",\"price\":9.99},{\"name\":\"Gadget\",\"price\":19.99}]",
"skip_errors": false,
"batch_size": 100
}
}
```
### Import Response
```json
{
"status": "success",
"data": {
"message": "Import completed successfully",
"rows_imported": 150,
"rows_failed": 0
}
}
```
### Import Best Practices
1. **Validate data format** - Ensure CSV/JSON is well-formed before importing
2. **Use batch_size** - Adjust batch size for optimal performance (default: 100); see the batching sketch after this list
3. **Enable skip_errors** - For large imports, set `skip_errors: true` to continue on individual row failures
4. **Column mapping** - Use when source column names don't match table columns
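As a sketch of what `batch_size` means in practice (a hypothetical helper, not the server's actual importer), rows are grouped into multi-row `INSERT` statements instead of one statement per row:
```typescript
import mysql from 'mysql2/promise';

// Insert parsed CSV rows in chunks of `batchSize` (sketch; connection
// details and the users(name, email, age) target are placeholders).
async function importRows(rows: Array<[string, string, number]>, batchSize = 100) {
  const conn = await mysql.createConnection({
    host: 'localhost', user: 'user', password: 'pass', database: 'db',
  });
  try {
    for (let i = 0; i < rows.length; i += batchSize) {
      const chunk = rows.slice(i, i + batchSize);
      // mysql2 expands a nested array after VALUES into (?, ?, ?), (?, ?, ?), ...
      await conn.query('INSERT INTO users (name, email, age) VALUES ?', [chunk]);
    }
  } finally {
    await conn.end();
  }
}
```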
---
## Database Backup & Restore
Enterprise-grade backup and restore functionality for MySQL databases.
### Backup & Restore Tools Overview
| Tool | Description | Permission |
|------|-------------|------------|
| `backup_table` | Backup single table to SQL dump | `utility` |
| `backup_database` | Backup entire database to SQL dump | `utility` |
| `restore_from_sql` | Restore from SQL dump content | `ddl` |
| `get_create_table_statement` | Get CREATE TABLE statement | `list` |
| `get_database_schema` | Get complete database schema | `list` |
### Backup Single Table
```json
{
"tool": "backup_table",
"arguments": {
"table_name": "users",
"include_data": true,
"include_drop": true
}
}
```
**Response:**
```json
{
"status": "success",
"data": {
"table_name": "users",
"sql_dump": "-- MySQL Dump...\nDROP TABLE IF EXISTS `users`;\nCREATE TABLE...\nINSERT INTO...",
"row_count": 1500,
"include_data": true,
"include_drop": true
}
}
```
### Backup Entire Database
```json
{
"tool": "backup_database",
"arguments": {
"include_data": true,
"include_drop": true
}
}
```
**Backup Specific Tables:**
```json
{
"tool": "backup_database",
"arguments": {
"tables": ["users", "orders", "products"],
"include_data": true
}
}
```
### Restore from SQL Dump
```json
{
"tool": "restore_from_sql",
"arguments": {
"sql_dump": "DROP TABLE IF EXISTS `users`;\nCREATE TABLE `users` (...);",
"stop_on_error": true
}
}
```
**Response:**
```json
{
"status": "success",
"data": {
"message": "Restore completed successfully",
"statements_executed": 25,
"statements_failed": 0
}
}
```
### Get Database Schema
Get a complete overview of all database objects:
```json
{
"tool": "get_database_schema",
"arguments": {
"include_views": true,
"include_procedures": true,
"include_functions": true,
"include_triggers": true
}
}
```
### Export to JSON Format
```json
{
"tool": "export_table_to_json",
"arguments": {
"table_name": "users",
"pretty": true,
"filters": [
{ "field": "status", "operator": "eq", "value": "active" }
]
}
}
```
### Export to SQL INSERT Statements
```json
{
"tool": "export_table_to_sql",
"arguments": {
"table_name": "products",
"include_create_table": true,
"batch_size": 100
}
}
```
### Backup Best Practices
1. **Regular backups** - Schedule regular database backups
2. **Test restores** - Periodically test your backup restoration process
3. **Include structure** - Always include `include_drop: true` for clean restores
4. **Schema-only backups** - Use `include_data: false` for structure-only backups
5. **Selective backups** - Use `tables` array to backup only critical tables
### Backup Safety Features
- **Transactional integrity** - Backups include transaction markers
- **Foreign key handling** - `SET FOREIGN_KEY_CHECKS=0` included in dumps (see the dump skeleton after this list)
- **Binary data support** - Proper escaping for BLOB and binary columns
- **Character encoding** - UTF-8 encoding preserved in exports
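Put together, a generated dump follows the familiar mysqldump-style skeleton (a sketch; the server's exact output differs in detail):
```typescript
// Overall shape of a backup_table / backup_database dump (sketch).
const dumpSkeleton = `
-- MySQL Dump
SET FOREIGN_KEY_CHECKS=0;

DROP TABLE IF EXISTS \`users\`;                 -- when include_drop is true
CREATE TABLE \`users\` ( /* columns... */ );
INSERT INTO \`users\` VALUES ( /* rows... */ ); -- when include_data is true

SET FOREIGN_KEY_CHECKS=1;
`;
```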
---
## Data Migration Tools
The MySQL MCP Server provides powerful data migration utilities for copying, moving, and synchronizing data between tables.
### Data Migration Tools Overview
| Tool | Description | Permission |
|------|-------------|------------|
| `copy_table_data` | Copy data from one table to another | `create` |
| `move_table_data` | Move data (copy + delete from source) | `create`, `delete` |
| `clone_table` | Clone table structure with optional data | `ddl` |
| `compare_table_structure` | Compare structure of two tables | `list` |
| `sync_table_data` | Synchronize data between tables | `update` |
### Copy Table Data
Copy data from one table to another with optional column mapping and filtering.
```json
{
"tool": "copy_table_data",
"arguments": {
"source_table": "users",
"target_table": "users_backup",
"batch_size": 1000
}
}
```
**With Column Mapping:**
```json
{
"tool": "copy_table_data",
"arguments": {
"source_table": "old_customers",
"target_table": "customers",
"column_mapping": {
"customer_name": "name",
"customer_email": "email",
"customer_phone": "phone"
}
}
}
```
**With Filters:**
```json
{
"tool": "copy_table_data",
"arguments": {
"source_table": "orders",
"target_table": "archived_orders",
"filters": [
{ "field": "status", "operator": "eq", "value": "completed" },
{ "field": "created_at", "operator": "lt", "value": "2024-01-01" }
]
}
}
```
**Response:**
```json
{
"status": "success",
"data": {
"message": "Data copied successfully",
"rows_copied": 5000,
"source_table": "orders",
"target_table": "archived_orders"
}
}
```
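Conceptually, `copy_table_data` is an `INSERT ... SELECT`; the filtered example above corresponds to something like this hand-written sketch:
```typescript
// Hand-written equivalent of the filtered copy above (sketch).
const copySql = `
  INSERT INTO archived_orders
  SELECT * FROM orders
  WHERE status = 'completed'
    AND created_at < '2024-01-01'
`;
```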
### Move Table Data
Move data from one table to another (copies data then deletes from source).
```json
{
"tool": "move_table_data",
"arguments": {
"source_table": "active_sessions",
"target_table": "expired_sessions",
"filters": [
{ "field": "expires_at", "operator": "lt", "value": "2024-01-01" }
]
}
}
```
**Response:**
```json
{
"status": "success",
"data": {
"message": "Data moved successfully",
"rows_moved": 1500,
"source_table": "active_sessions",
"target_table": "expired_sessions"
}
}
```
### Clone Table
Clone a table structure with or without data.
```json
{
"tool": "clone_table",
"arguments": {
"source_table": "products",
"new_table_name": "products_staging",
"include_data": false,
"include_indexes": true
}
}
```
**Clone with Data:**
```json
{
"tool": "clone_table",
"arguments": {
"source_table": "users",
"new_table_name": "users_test",
"include_data": true
}
}
```
**Response:**
```json
{
"status": "success",
"data": {
"message": "Table cloned successfully",
"source_table": "products",
"new_table": "products_staging",
"include_data": false,
"include_indexes": true
}
}
```
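Conceptually, cloning maps onto standard MySQL statements (a sketch; the server may implement it differently):
```typescript
// include_indexes: true -> CREATE TABLE ... LIKE copies columns and indexes.
const cloneStructureSql = 'CREATE TABLE `products_staging` LIKE `products`';

// include_data: true -> followed by a full INSERT ... SELECT.
const copyDataSql = 'INSERT INTO `users_test` SELECT * FROM `users`';
```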
### Compare Table Structure
Compare the structure of two tables to identify differences.
```json
{
"tool": "compare_table_structure",
"arguments": {
"table1": "users",
"table2": "users_backup"
}
}
```
**Response:**
```json
{
"status": "success",
"data": {
"table1": "users",
"table2": "users_backup",
"identical": false,
"differences": {
"columns_only_in_table1": ["last_login", "avatar_url"],
"columns_only_in_table2": [],
"column_type_differences": [
{
"column": "email",
"table1_type": "VARCHAR(255)",
"table2_type": "VARCHAR(100)"
}
],
"index_differences": {
"only_in_table1": ["idx_last_login"],
"only_in_table2": []
}
}
}
}
```
### Sync Table Data
Synchronize data between two tables based on a key column. Supports three modes:
- **insert_only**: Only insert new records that don't exist in target
- **update_only**: Only update existing records in target
- **upsert**: Both insert new and update existing records (default)
```json
{
"tool": "sync_table_data",
"arguments": {
"source_table": "products_master",
"target_table": "products_replica",
"key_column": "product_id",
"sync_mode": "upsert"
}
}
```
**Sync Specific Columns:**
```json
{
"tool": "sync_table_data",
"arguments": {
"source_table": "inventory_main",
"target_table": "inventory_cache",
"key_column": "sku",
"columns_to_sync": ["quantity", "price", "updated_at"],
"sync_mode": "update_only"
}
}
```
**Response:**
```json
{
"status": "success",
"data": {
"message": "Sync completed successfully",
"source_table": "products_master",
"target_table": "products_replica",
"rows_inserted": 150,
"rows_updated": 3200,
"sync_mode": "upsert"
}
}
```
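In `upsert` mode the effect corresponds to MySQL's `INSERT ... ON DUPLICATE KEY UPDATE`, which requires the key column to be a `PRIMARY` or `UNIQUE` key on the target. A hand-written sketch for the first example (the column names are assumptions for illustration):
```typescript
// Illustrative equivalent of sync_mode: "upsert" (sketch).
const upsertSql = `
  INSERT INTO products_replica (product_id, name, price)
  SELECT product_id, name, price FROM products_master
  ON DUPLICATE KEY UPDATE
    name  = VALUES(name),
    price = VALUES(price)
`;
```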
### Migration Best Practices
1. **Backup before migration** - Always backup target tables before large migrations
2. **Use filters** - Migrate data in chunks using filters to avoid timeouts
3. **Test with small batches** - Test migration logic with small datasets first
4. **Verify data integrity** - Use `compare_table_structure` before migration
5. **Monitor performance** - Adjust `batch_size` based on table size and server capacity
### Common Migration Patterns
**Pattern 1: Archive Old Data**
```json
// Move old orders to archive table
{
"tool": "move_table_data",
"arguments": {
"source_table": "orders",
"target_table": "orders_archive",
"filters": [
{ "field": "created_at", "operator": "lt", "value": "2023-01-01" }
]
}
}
```
**Pattern 2: Create Staging Table**
```json
// Clone structure for staging
{
"tool": "clone_table",
"arguments": {
"source_table": "products",
"new_table_name": "products_staging",
"include_data": false
}
}
```
**Pattern 3: Replicate Data Across Tables**
```json
// Keep replica in sync with master
{
"tool": "sync_table_data",
"arguments": {
"source_table": "users_master",
"target_table": "users_read_replica",
"key_column": "id",
"sync_mode": "upsert"
}
}
```
---
## Schema Versioning and Migrations
The MySQL MCP Server provides comprehensive schema versioning and migration tools for managing database schema changes in a controlled, trackable manner. This feature enables version control for your database schema with support for applying and rolling back migrations.
### Schema Versioning Tools Overview
| Tool | Description | Permission |
|------|-------------|------------|
| `init_migrations_table` | Initialize the migrations tracking table | `ddl` |
| `create_migration` | Create a new migration entry | `ddl` |
| `apply_migrations` | Apply pending migrations | `ddl` |
| `rollback_migration` | Rollback applied migrations | `ddl` |
| `get_migration_status` | Get migration history and status | `list` |
| `get_schema_version` | Get current schema version | `list` |
| `validate_migrations` | Validate migrations for issues | `list` |
| `reset_failed_migration` | Reset a failed migration to pending | `ddl` |
| `generate_migration_from_diff` | Generate migration from table comparison | `ddl` |
### ⚠️ Enable Schema Versioning
Schema versioning operations require `ddl` permission:
```json
{
  "args": [
    "mysql://user:pass@localhost:3306/mydb",
    "list,read,create,update,delete,ddl"
  ]
}
```
### Initialize Migrations Table
Before using migrations, initialize the tracking table:
```json
{
"tool": "init_migrations_table",
"arguments": {}
}
```
**Response:**
```json
{
"status": "success",
"data": {
"message": "Migrations table '_schema_migrations' initialized successfully",
"table_name": "_schema_migrations"
}
}
```
### Creating Migrations
Create a migration with up and down SQL:
```json
{
"tool": "create_migration",
"arguments": {
"name": "add_users_table",
"description": "Create the users table with basic fields",
"up_sql": "CREATE TABLE users (id INT AUTO_INCREMENT PRIMARY KEY, email VARCHAR(255) NOT NULL UNIQUE, name VARCHAR(100), created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP);",
"down_sql": "DROP TABLE IF EXISTS users;"
}
}
```
**Response:**
```json
{
"status": "success",
"data": {
"message": "Migration 'add_users_table' created successfully",
"version": "20240115120000",
"name": "add_users_table",
"checksum": "a1b2c3d4",
"status": "pending"
}
}
```
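Versions are timestamp-based (`YYYYMMDDHHMMSS`) and the checksum is computed over `up_sql` so later modification can be detected. A sketch of how such values can be derived (the server's exact algorithm is an implementation detail):
```typescript
import { createHash } from 'node:crypto';

// Timestamp-based version, e.g. "20240115120000" (UTC; sketch).
function migrationVersion(now = new Date()): string {
  return now.toISOString().replace(/[-:T]/g, '').slice(0, 14);
}

// Integrity checksum over up_sql (algorithm assumed here, not confirmed).
function migrationChecksum(upSql: string): string {
  return createHash('sha256').update(upSql).digest('hex').slice(0, 8);
}
```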
#### Multi-Statement Migrations
Migrations can contain multiple SQL statements separated by semicolons:
```json
{
"tool": "create_migration",
"arguments": {
"name": "add_orders_and_items",
"up_sql": "CREATE TABLE orders (id INT AUTO_INCREMENT PRIMARY KEY, user_id INT, total DECIMAL(10,2), created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP); CREATE TABLE order_items (id INT AUTO_INCREMENT PRIMARY KEY, order_id INT, product_id INT, quantity INT, price DECIMAL(10,2)); ALTER TABLE order_items ADD CONSTRAINT fk_order FOREIGN KEY (order_id) REFERENCES orders(id);",
"down_sql": "DROP TABLE IF EXISTS order_items; DROP TABLE IF EXISTS orders;"
}
}
```
### Applying Migrations
Apply all pending migrations:
```json
{
"tool": "apply_migrations",
"arguments": {}
}
```
**Response:**
```json
{
"status": "success",
"data": {
"message": "Successfully applied 3 migration(s)",
"applied_count": 3,
"failed_count": 0,
"applied_migrations": [
{"version": "20240115120000", "name": "add_users_table", "execution_time_ms": 45},
{"version": "20240115130000", "name": "add_orders_table", "execution_time_ms": 32},
{"version": "20240115140000", "name": "add_products_table", "execution_time_ms": 28}
]
}
}
```
#### Apply to Specific Version
```json
{
"tool": "apply_migrations",
"arguments": {
"target_version": "20240115130000"
}
}
```
#### Dry Run Mode
Preview migrations without executing:
```json
{
"tool": "apply_migrations",
"arguments": {
"dry_run": true
}
}
```
### Rolling Back Migrations
Rollback the last migration:
```json
{
"tool": "rollback_migration",
"arguments": {
"steps": 1
}
}
```
Rollback multiple migrations:
```json
{
"tool": "rollback_migration",
"arguments": {
"steps": 3
}
}
```
Rollback to a specific version (exclusive):
```json
{
"tool": "rollback_migration",
"arguments": {
"target_version": "20240115120000"
}
}
```
**Response:**
```json
{
"status": "success",
"data": {
"message": "Successfully rolled back 2 migration(s)",
"rolled_back_count": 2,
"failed_count": 0,
"rolled_back_migrations": [
{"version": "20240115140000", "name": "add_products_table", "execution_time_ms": 15},
{"version": "20240115130000", "name": "add_orders_table", "execution_time_ms": 12}
]
}
}
```
### Getting Schema Version
Check the current schema version:
```json
{
"tool": "get_schema_version",
"arguments": {}
}
```
**Response:**
```json
{
"status": "success",
"data": {
"current_version": "20240115140000",
"current_migration_name": "add_products_table",
"applied_at": "2024-01-15T14:30:00.000Z",
"pending_migrations": 2,
"migrations_table_exists": true
}
}
```
### Getting Migration Status
View migration history with status:
```json
{
"tool": "get_migration_status",
"arguments": {
"limit": 10
}
}
```
Filter by status:
```json
{
"tool": "get_migration_status",
"arguments": {
"status": "failed"
}
}
```
**Response:**
```json
{
"status": "success",
"data": {
"current_version": "20240115140000",
"summary": {
"total": 5,
"pending": 1,
"applied": 3,
"failed": 1,
"rolled_back": 0
},
"migrations": [
{
"id": 5,
"version": "20240115150000",
"name": "add_analytics_table",
"status": "pending",
"applied_at": null,
"execution_time_ms": null
},
{
"id": 4,
"version": "20240115140000",
"name": "add_products_table",
"status": "applied",
"applied_at": "2024-01-15T14:30:00.000Z",
"execution_time_ms": 28
}
]
}
}
```
### Validating Migrations
Check migrations for potential issues:
```json
{
"tool": "validate_migrations",
"arguments": {}
}
```
**Response:**
```json
{
"status": "success",
"data": {
"valid": false,
"total_migrations": 5,
"issues_count": 1,
"warnings_count": 2,
"issues": [
{
"type": "checksum_mismatch",
"version": "20240115120000",
"name": "add_users_table",
"message": "Migration 'add_users_table' checksum mismatch - migration may have been modified after being applied"
}
],
"warnings": [
{
"type": "missing_down_sql",
"version": "20240115150000",
"name": "add_analytics_table",
"message": "Migration 'add_analytics_table' has no down_sql - rollback will not be possible"
},
{
"type": "blocked_migrations",
"message": "1 pending migration(s) are blocked by failed migration 'add_audit_table'"
}
]
}
}
```
### Resetting Failed Migrations
Reset a failed migration to try again:
```json
{
"tool": "reset_failed_migration",
"arguments": {
"version": "20240115145000"
}
}
```
**Response:**
```json
{
"status": "success",
"data": {
"message": "Migration 'add_audit_table' (20240115145000) has been reset to pending status",
"version": "20240115145000",
"name": "add_audit_table",
"previous_status": "failed",
"new_status": "pending"
}
}
```
### Generating Migrations from Table Differences
Automatically generate a migration by comparing two table structures:
```json
{
"tool": "generate_migration_from_diff",
"arguments": {
"table1": "users_v2",
"table2": "users",
"migration_name": "update_users_to_v2"
}
}
```
**Response:**
```json
{
"status": "success",
"data": {
"message": "Migration 'update_users_to_v2' generated with 3 change(s)",
"version": "20240115160000",
"changes_count": 3,
"up_sql": "ALTER TABLE `users` ADD COLUMN `phone` VARCHAR(20) NULL;\nALTER TABLE `users` ADD COLUMN `avatar_url` VARCHAR(500) NULL;\nALTER TABLE `users` MODIFY COLUMN `name` VARCHAR(200) NOT NULL;",
"down_sql": "ALTER TABLE `users` DROP COLUMN `phone`;\nALTER TABLE `users` DROP COLUMN `avatar_url`;\nALTER TABLE `users` MODIFY COLUMN `name` VARCHAR(100) NULL;",
"source_table": "users_v2",
"target_table": "users"
}
}
```
### Migration Best Practices
1. **Always include down_sql**: Enable rollback capability for all migrations
2. **Test migrations first**: Use `dry_run: true` to preview changes
3. **Validate before applying**: Run `validate_migrations` to check for issues
4. **Use descriptive names**: Make migration names clear and meaningful
5. **Keep migrations small**: One logical change per migration
6. **Version control migrations**: Store migration SQL in your VCS
7. **Never modify applied migrations**: Create new migrations for changes
8. **Backup before migrating**: Always backup production databases first
### Common Migration Patterns
#### Adding a Column
```json
{
"tool": "create_migration",
"arguments": {
"name": "add_user_phone",
"up_sql": "ALTER TABLE users ADD COLUMN phone VARCHAR(20) NULL AFTER email;",
"down_sql": "ALTER TABLE users DROP COLUMN phone;"
}
}
```
#### Adding an Index
```json
{
"tool": "create_migration",
"arguments": {
"name": "add_email_index",
"up_sql": "CREATE INDEX idx_users_email ON users(email);",
"down_sql": "DROP INDEX idx_users_email ON users;"
}
}
```
#### Renaming a Column
```json
{
"tool": "create_migration",
"arguments": {
"name": "rename_user_name_to_full_name",
"up_sql": "ALTER TABLE users CHANGE COLUMN name full_name VARCHAR(100);",
"down_sql": "ALTER TABLE users CHANGE COLUMN full_name name VARCHAR(100);"
}
}
```
#### Adding Foreign Key
```json
{
"tool": "create_migration",
"arguments": {
"name": "add_orders_user_fk",
"up_sql": "ALTER TABLE orders ADD CONSTRAINT fk_orders_user FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE;",
"down_sql": "ALTER TABLE orders DROP FOREIGN KEY fk_orders_user;"
}
}
```
### Migration Table Schema
The `_schema_migrations` table stores all migration information:
| Column | Type | Description |
|--------|------|-------------|
| id | INT | Auto-increment primary key |
| version | VARCHAR(14) | Migration version (timestamp-based) |
| name | VARCHAR(255) | Migration name |
| description | TEXT | Optional description |
| up_sql | LONGTEXT | SQL to apply migration |
| down_sql | LONGTEXT | SQL to rollback migration |
| checksum | VARCHAR(64) | Checksum of up_sql for integrity |
| applied_at | TIMESTAMP | When migration was applied |
| applied_by | VARCHAR(255) | User who applied migration |
| execution_time_ms | INT | Execution time in milliseconds |
| status | ENUM | pending, applied, failed, rolled_back |
| error_message | TEXT | Error message if failed |
| created_at | TIMESTAMP | When migration was created |
---
## Transaction Management
The MySQL MCP Server provides full ACID transaction support, allowing you to group multiple database operations into atomic units.
### Transaction Tools Overview
- **`begin_transaction`** - Start a new transaction
- **`execute_in_transaction`** - Execute SQL within transaction context
- **`commit_transaction`** - Permanently save all changes
- **`rollback_transaction`** - Discard all changes since transaction start
- **`get_transaction_status`** - Check if transaction is active
### Transaction Example: Money Transfer
**User:** *"Transfer $100 from Alice's account to Bob's account"*
**AI executes:**
```json
// Step 1: Begin transaction
{
"tool": "begin_transaction"
}
// Step 2: Deduct from Alice's account
{
"tool": "execute_in_transaction",
"arguments": {
"sql": "UPDATE accounts SET balance = balance - 100 WHERE name = 'Alice'"
}
}
// Step 3: Add to Bob's account
{
"tool": "execute_in_transaction",
"arguments": {
"sql": "UPDATE accounts SET balance = balance + 100 WHERE name = 'Bob'"
}
}
// Step 4: Verify the resulting balances look correct before committing
{
"tool": "execute_in_transaction",
"arguments": {
"sql": "SELECT * FROM accounts WHERE name IN ('Alice', 'Bob')"
}
}
// Step 5: Commit if everything is valid
{
"tool": "commit_transaction"
}
```
### Transaction Safety Features
1. ✅ **Atomic Operations** - All operations succeed or all fail together
2. ✅ **Automatic Rollback** - If any operation fails, the transaction automatically rolls back
3. ✅ **Isolation** - Other sessions see changes only after commit
4. ✅ **Status Checking** - Always know if a transaction is active
5. ✅ **Error Handling** - Comprehensive error reporting for failed operations
### Transaction Best Practices
1. **Keep transactions short** - Long transactions can block other operations
2. **Always commit or rollback** - Don't leave transactions hanging
3. **Test transaction logic** - Verify your transaction sequence works correctly
4. **Handle errors gracefully** - Check for errors after each operation
5. **Use appropriate isolation levels** - Understand your consistency requirements
### Common Transaction Patterns
**Pattern 1: Safe Update with Verification**
```json
// Begin transaction
// Update records
// Verify changes with SELECT
// Commit if valid, rollback if not
```
**Pattern 2: Batch Operations**
```json
// Begin transaction
// Insert multiple related records
// Update related tables
// Commit all changes together
```
**Pattern 3: Error Recovery**
```json
// Begin transaction
// Try operations
// If error occurs: rollback
// If success: commit
```
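For comparison, Pattern 3 written directly against Node's `mysql2` driver looks like this (a sketch reusing the money-transfer example; connection details are placeholders):
```typescript
import mysql from 'mysql2/promise';

async function transfer(amount: number) {
  const conn = await mysql.createConnection({
    host: 'localhost', user: 'user', password: 'pass', database: 'db',
  });
  try {
    await conn.beginTransaction();
    await conn.query("UPDATE accounts SET balance = balance - ? WHERE name = 'Alice'", [amount]);
    await conn.query("UPDATE accounts SET balance = balance + ? WHERE name = 'Bob'", [amount]);
    await conn.commit();   // success: persist both updates atomically
  } catch (err) {
    await conn.rollback(); // any failure: discard both updates
    throw err;
  } finally {
    await conn.end();
  }
}
```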
---
## Stored Procedures
The MySQL MCP Server provides comprehensive stored procedure management, allowing you to create, execute, and manage stored procedures with full parameter support.
### Stored Procedure Tools Overview
- **`list_stored_procedures`** - List all stored procedures in a database
- **`create_stored_procedure`** - Create new stored procedures with IN/OUT/INOUT parameters
- **`get_stored_procedure_info`** - Get detailed information about parameters and metadata
- **`execute_stored_procedure`** - Execute procedures with automatic parameter handling
- **`drop_stored_procedure`** - Delete stored procedures safely
### ⚠️ Enable Stored Procedures
Stored procedure operations require the `procedure` permission. Add it to your configuration:
```json
{
"args": [
"mysql://user:pass@localhost:3306/db",
"list,read,procedure,utility" // β Include 'procedure'
]
}
```
### Creating Stored Procedures
**User:** *"Create a stored procedure that calculates tax for a given amount"*
**AI will execute:**
```json
{
"tool": "create_stored_procedure",
"arguments": {
"procedure_name": "calculate_tax",
"parameters": [
{
"name": "amount",
"mode": "IN",
"data_type": "DECIMAL(10,2)"
},
{
"name": "tax_rate",
"mode": "IN",
"data_type": "DECIMAL(5,4)"
},
{
"name": "tax_amount",
"mode": "OUT",
"data_type": "DECIMAL(10,2)"
}
],
"body": "SET tax_amount = amount * tax_rate;",
"comment": "Calculate tax amount based on amount and tax rate"
}
}
```
### Executing Stored Procedures
**User:** *"Calculate tax for $1000 with 8.5% tax rate"*
**AI will execute:**
```json
{
"tool": "execute_stored_procedure",
"arguments": {
"procedure_name": "calculate_tax",
"parameters": [1000.00, 0.085]
}
}
```
**Result:**
```json
{
"status": "success",
"data": {
"results": { /* execution results */ },
"outputParameters": {
"tax_amount": 85.00
}
}
}
```
### Parameter Types
**IN Parameters** - Input values passed to the procedure
```sql
IN user_id INT
IN email VARCHAR(255)
```
**OUT Parameters** - Output values returned by the procedure
```sql
OUT total_count INT
OUT average_score DECIMAL(5,2)
```
**INOUT Parameters** - Values that are both input and output
```sql
INOUT running_total DECIMAL(10,2)
```
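In plain MySQL, `OUT` values are read back through session variables, which is essentially what `execute_stored_procedure` automates. A hand-written equivalent of the `calculate_tax` call (sketch):
```typescript
import type { Connection } from 'mysql2/promise';

async function callCalculateTax(conn: Connection) {
  // Pass the IN values and bind the OUT parameter to a session variable...
  await conn.query('CALL calculate_tax(?, ?, @tax_amount)', [1000.0, 0.085]);
  // ...then read the session variable back.
  const [rows] = await conn.query('SELECT @tax_amount AS tax_amount');
  return rows; // e.g. [ { tax_amount: 85.00 } ]
}
```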
### Complex Stored Procedure Example
**User:** *"Create a procedure to process an order with inventory check"*
```json
{
"tool": "create_stored_procedure",
"arguments": {
"procedure_name": "process_order",
"parameters": [
{ "name": "product_id", "mode": "IN", "data_type": "INT" },
{ "name": "quantity", "mode": "IN", "data_type": "INT" },
{ "name": "customer_id", "mode": "IN", "data_type": "INT" },
{ "name": "order_id", "mode": "OUT", "data_type": "INT" },
{ "name": "success", "mode": "OUT", "data_type": "BOOLEAN" }
],
"body": "DECLARE available_qty INT; SELECT stock_quantity INTO available_qty FROM products WHERE id = product_id; IF available_qty >= quantity THEN INSERT INTO orders (customer_id, product_id, quantity) VALUES (customer_id, product_id, quantity); SET order_id = LAST_INSERT_ID(); UPDATE products SET stock_quantity = stock_quantity - quantity WHERE id = product_id; SET success = TRUE; ELSE SET order_id = 0; SET success = FALSE; END IF;",
"comment": "Process order with inventory validation"
}
}
```
### Getting Procedure Information
**User:** *"Show me details about the calculate_tax procedure"*
**AI will execute:**
```json
{
"tool": "get_stored_procedure_info",
"arguments": {
"procedure_name": "calculate_tax"
}
}
```
**Returns detailed information:**
- Procedure metadata (created date, security type, etc.)
- Parameter details (names, types, modes)
- Procedure definition
- Comments and documentation
### Stored Procedure Best Practices
1. ✅ **Use descriptive names** - Make procedure purposes clear
2. ✅ **Document with comments** - Add meaningful comments to procedures
3. ✅ **Validate inputs** - Check parameter values within procedures
4. ✅ **Handle errors** - Use proper error handling in procedure bodies
5. ✅ **Test thoroughly** - Verify procedures work with various inputs
6. ✅ **Use appropriate data types** - Choose correct types for parameters
7. ✅ **Consider security** - Be mindful of SQL injection in dynamic SQL
### Common Stored Procedure Patterns
**Pattern 1: Data Validation and Processing**
```sql
-- Validate input, process if valid, return status
IF input_value > 0 THEN
-- Process data
SET success = TRUE;
ELSE
SET success = FALSE;
END IF;
```
**Pattern 2: Complex Business Logic**
```sql
-- Multi-step business process
-- Step 1: Validate
-- Step 2: Calculate
-- Step 3: Update multiple tables
-- Step 4: Return results
```
**Pattern 3: Reporting and Analytics**
```sql
-- Aggregate data from multiple tables
-- Apply business rules
-- Return calculated results
```
---
## Views Management
Views allow you to create virtual tables based on SQL SELECT statements. The MySQL MCP Server provides comprehensive view management tools.
### View Tools Overview
- **`list_views`** - List all views in the database
- **`get_view_info`** - Get detailed information about a view including columns
- **`create_view`** - Create a new view with SELECT definition
- **`alter_view`** - Alter an existing view definition
- **`drop_view`** - Drop a view
- **`show_create_view`** - Show the CREATE statement for a view
### Creating Views
**User:** *"Create a view that shows active users with their order count"*
**AI will execute:**
```json
{
"tool": "create_view",
"arguments": {
"view_name": "active_users_orders",
"definition": "SELECT u.id, u.name, u.email, COUNT(o.id) as order_count FROM users u LEFT JOIN orders o ON u.id = o.user_id WHERE u.status = 'active' GROUP BY u.id",
"or_replace": true
}
}
```
### View Options
| Option | Description |
|--------|-------------|
| `or_replace` | If true, replaces existing view with same name |
| `algorithm` | UNDEFINED, MERGE, or TEMPTABLE |
| `security` | DEFINER or INVOKER |
| `check_option` | CASCADED or LOCAL for updatable views |
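Together these options map onto the clauses of MySQL's `CREATE VIEW` syntax; a call combining them would correspond to roughly this statement (sketch):
```typescript
// Statement shape the view options map to (sketch).
// check_option would additionally append WITH CASCADED | LOCAL CHECK OPTION,
// which MySQL only accepts on updatable views.
const createViewSql = `
  CREATE OR REPLACE
    ALGORITHM = UNDEFINED
    SQL SECURITY INVOKER
  VIEW active_users_orders AS
    SELECT u.id, u.name, u.email, COUNT(o.id) AS order_count
    FROM users u LEFT JOIN orders o ON u.id = o.user_id
    WHERE u.status = 'active'
    GROUP BY u.id
`;
```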
---
## Triggers Management
Triggers are database callbacks that automatically execute when specific events occur on a table.
### Trigger Tools Overview
- **`list_triggers`** - List all triggers, optionally filtered by table
- **`get_trigger_info`** - Get detailed information about a trigger
- **`create_trigger`** - Create a new trigger
- **`drop_trigger`** - Drop a trigger
- **`show_create_trigger`** - Show the CREATE statement for a trigger
### Creating Triggers
**User:** *"Create a trigger that logs all updates to the users table"*
**AI will execute:**
```json
{
"tool": "create_trigger",
"arguments": {
"trigger_name": "users_update_log",
"table_name": "users",
"timing": "AFTER",
"event": "UPDATE",
"body": "INSERT INTO audit_log (table_name, action, record_id, changed_at) VALUES ('users', 'UPDATE', NEW.id, NOW());"
}
}
```
### Trigger Timing and Events
| Timing | Events | Description |
|--------|--------|-------------|
| BEFORE | INSERT, UPDATE, DELETE | Execute before the operation |
| AFTER | INSERT, UPDATE, DELETE | Execute after the operation |
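The example above corresponds to a standard `CREATE TRIGGER` statement; a single-statement body needs no `BEGIN ... END` block (sketch):
```typescript
// SQL shape behind the create_trigger call above (sketch).
const createTriggerSql = `
  CREATE TRIGGER users_update_log
  AFTER UPDATE ON users
  FOR EACH ROW
  INSERT INTO audit_log (table_name, action, record_id, changed_at)
  VALUES ('users', 'UPDATE', NEW.id, NOW())
`;
```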
---
## Functions Management
User-defined functions (UDFs) allow you to create reusable SQL functions that can be called in queries.
### Function Tools Overview
- **`list_functions`** - List all user-defined functions
- **`get_function_info`** - Get detailed information about a function
- **`create_function`** - Create a new function
- **`drop_function`** - Drop a function
- **`show_create_function`** - Show the CREATE statement
- **`execute_function`** - Execute a function and return its result
### Creating Functions
**User:** *"Create a function that calculates the discount price"*
**AI will execute:**
```json
{
"tool": "create_function",
"arguments": {
"function_name": "calculate_discount",
"parameters": [
{"name": "price", "data_type": "DECIMAL(10,2)"},
{"name": "discount_percent", "data_type": "INT"}
],
"returns": "DECIMAL(10,2)",
"body": "RETURN price - (price * discount_percent / 100);",
"deterministic": true,
"comment": "Calculate discounted price"
}
}
```
### Executing Functions
```json
{
"tool": "execute_function",
"arguments": {
"function_name": "calculate_discount",
"parameters": [100.00, 15]
}
}
```
**Returns:** `{"result": 85.00}`
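Behind the scenes these calls correspond to standard MySQL function DDL and invocation (sketch):
```typescript
// SQL shape behind create_function / execute_function above (sketch).
const createFunctionSql = `
  CREATE FUNCTION calculate_discount(price DECIMAL(10,2), discount_percent INT)
  RETURNS DECIMAL(10,2)
  DETERMINISTIC
  COMMENT 'Calculate discounted price'
  RETURN price - (price * discount_percent / 100)
`;

// A function can then be used anywhere an expression is allowed:
const useFunctionSql = 'SELECT calculate_discount(100.00, 15) AS discounted';
```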
---
## Index Management
Indexes improve query performance by allowing MySQL to find rows faster.
### Index Tools Overview
- **`list_indexes`** - List all indexes for a table
- **`get_index_info`** - Get detailed information about an index
- **`create_index`** - Create a new index
- **`drop_index`** - Drop an index
- **`analyze_index`** - Update index statistics
### Creating Indexes
**User:** *"Create an index on the email column of the users table"*
**AI will execute:**
```json
{
"tool": "create_index",
"arguments": {
"table_name": "users",
"index_name": "idx_users_email",
"columns": ["email"],
"unique": true
}
}
```
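The call above corresponds to a plain `CREATE UNIQUE INDEX` statement (sketch):
```typescript
// SQL shape behind the create_index call above (sketch).
const createIndexSql = 'CREATE UNIQUE INDEX idx_users_email ON users (email)';
```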
### Index Types
| Type | Description | Use Case |
|------|-------------|----------|
| BTREE | Default