mssql-performance-mcp
MCP server for SQL Server performance tuning and optimization. Provides tools for analyzing slow queries, execution plans, index fragmentation, missing indexes, and more.
# MSSQL Performance MCP - Usage Examples
This document provides real-world usage examples for the MSSQL Performance MCP server with Claude Desktop.
## 🎯 Getting Started
After setting up the MCP server in Claude Desktop, you can use natural language to interact with your SQL Server instances.
## 📊 Scenario 1: Daily Performance Check
### Morning Health Check
**You ask Claude:**
```
I need to run a daily performance check on our production server 'sql-prod-01.company.com',
database 'TestDB_Prod'. Please check:
1. Overall health
2. Top 5 slow queries
3. Wait statistics
4. Any missing indexes with high impact
```
**Claude will:**
1. Run `get_performance_health_check` to get an overview
2. Use `get_slow_queries` with top_n=5
3. Execute `get_wait_statistics`
4. Check `get_missing_indexes` with high impact threshold
**What you get:**
- Comprehensive health report
- Actionable insights about slow queries
- Wait type analysis with recommendations
- Index creation scripts ready to review
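Under the hood, `get_slow_queries` most likely wraps a DMV query along these lines (a sketch, not the server's actual implementation); you can run something similar in SSMS to cross-check the results:

```sql
-- Sketch of a top-N slow-query lookup via sys.dm_exec_query_stats.
-- Illustrative only; the tool's real query may differ.
SELECT TOP (5)
    qs.total_worker_time / qs.execution_count  AS avg_cpu_microseconds,
    qs.execution_count,
    qs.total_elapsed_time / qs.execution_count AS avg_duration_microseconds,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time / qs.execution_count DESC;
```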
## 🔍 Scenario 2: Investigating Slow Application
### User Complaint: "The app is slow"
**You ask Claude:**
```
Users are reporting slow performance on database 'TestDB_5100' on server 'sql-aws-rds.amazonaws.com'.
Can you help me investigate? Check:
- What queries are consuming the most CPU?
- Are there any blocking issues?
- Check for index fragmentation
- Look for missing indexes
```
**Claude will:**
1. Run `get_slow_queries` ordered by CPU
2. Execute `get_blocking_queries` to find locks
3. Check `get_index_fragmentation` for fragmentation issues
4. Use `get_missing_indexes` to find optimization opportunities
**Analysis Flow:**
```
Step 1: Slow Queries Analysis
└─> Found: 3 queries consuming >50% CPU
Step 2: Blocking Check
└─> Found: 2 sessions blocked by session 54
Step 3: Fragmentation
└─> Found: 15 indexes with >30% fragmentation
Step 4: Missing Indexes
└─> Found: 8 high-impact missing indexes
```
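The blocking check in Step 2 can be reproduced by hand with a query like this sketch against `sys.dm_exec_requests` (illustrative; the tool's actual query may differ):

```sql
-- Sessions currently blocked, with the blocker's session_id
-- and the blocked statement's text.
SELECT
    r.session_id,
    r.blocking_session_id,
    r.wait_type,
    r.wait_time AS wait_time_ms,
    st.text     AS blocked_query
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
WHERE r.blocking_session_id <> 0;
```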
## 🛠️ Scenario 3: Index Optimization
### Weekly Index Maintenance
**You ask Claude:**
```
For database 'TestDB_5100':
1. Show me unused indexes larger than 50 MB
2. Check fragmentation on all indexes
3. Generate a maintenance script for indexes with >20% fragmentation
4. Use ONLINE rebuild where possible
```
**Claude will:**
1. Run `get_unused_indexes` with min_size_mb=50
2. Execute `get_index_fragmentation` with min_fragmentation=20
3. Generate script with `generate_index_maintenance_script`
**Output Example:**
```sql
-- Unused Indexes to Consider Dropping
DROP INDEX [IX_OldIndex_UnusedColumn] ON [dbo].[LargeTable];
-- Saves: 127.5 MB
-- Maintenance Script (Generated)
ALTER INDEX [IX_Items_DateCreated] ON [Item].[Items]
REBUILD WITH (ONLINE = ON, SORT_IN_TEMPDB = ON);
ALTER INDEX [IX_Users_Email] ON [Security].[Users]
REORGANIZE;
```
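The fragmentation data behind `get_index_fragmentation` comes from `sys.dm_db_index_physical_stats`; a hand-rolled equivalent looks roughly like this (thresholds here are illustrative, not the tool's defaults):

```sql
-- Indexes above 20% fragmentation, skipping tiny indexes
-- where maintenance rarely pays off.
SELECT
    OBJECT_NAME(ips.object_id) AS table_name,
    i.name                     AS index_name,
    ips.avg_fragmentation_in_percent,
    ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent > 20
  AND ips.page_count > 1000
ORDER BY ips.avg_fragmentation_in_percent DESC;
```

The usual rule of thumb the generated script follows: REORGANIZE for moderate fragmentation, REBUILD above roughly 30%.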
## 📈 Scenario 4: Historical Analysis
### Analyzing Last Week's Performance
**You ask Claude:**
```
I need to analyze query performance for the last 7 days on database 'TestDB_Prod'.
Use Query Store to show:
- Top 20 CPU-consuming queries
- Queries with highest logical reads
- Most frequently executed queries
```
**Claude will:**
```
1. Check if Query Store is enabled
2. Run get_query_store_top_queries with hours_back=168 (7 days)
- First ordered by CPU
- Then by logical_reads
- Finally by executions
```
**Insights provided:**
- Trends over time
- Query regression identification
- Resource consumption patterns
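For reference, a top-CPU report from Query Store over the last 168 hours can be sketched like this (the tool's actual query and aggregation may differ):

```sql
-- Top 20 queries by total CPU from Query Store runtime stats
-- over the last 7 days. avg_cpu_time is in microseconds.
SELECT TOP (20)
    q.query_id,
    qt.query_sql_text,
    SUM(rs.count_executions)                   AS executions,
    SUM(rs.avg_cpu_time * rs.count_executions) AS total_cpu_time_us
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs ON p.plan_id = rs.plan_id
JOIN sys.query_store_runtime_stats_interval AS rsi
    ON rs.runtime_stats_interval_id = rsi.runtime_stats_interval_id
WHERE rsi.start_time >= DATEADD(HOUR, -168, SYSUTCDATETIME())
GROUP BY q.query_id, qt.query_sql_text
ORDER BY total_cpu_time_us DESC;
```

Swapping the `ORDER BY` to a logical-reads or executions aggregate covers the other two orderings.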
## 🔧 Scenario 5: Specific Table Analysis
### Deep Dive into a Problem Table
**You ask Claude:**
```
I need to analyze the 'Items' table in the 'Item' schema on database 'TestDB_5100':
- Show me all index usage statistics
- Check table size and row count
- Look for missing indexes on this table specifically
- Check if statistics are outdated
```
**Claude will:**
1. Use `get_index_usage_stats` filtered by schema='Item' and table='Items'
2. Run `get_table_sizes` for the specific table
3. Check `get_missing_indexes` and filter results for the table
4. Use `get_database_statistics` to check stats age
**Analysis Report:**
```
Table: Item.Items
├─ Size: 2,450 MB
├─ Rows: 18,500,000
├─ Indexes: 12 total
│ ├─ PK_Items (Clustered): 15M seeks, 2K scans
│ ├─ IX_Items_DateCreated: 8M seeks, 50K scans
│ ├─ IX_Items_Status: UNUSED (0 seeks, 12K updates)
│ └─ ...
├─ Missing Indexes: 2 high-impact recommendations
└─ Statistics: 3 stats older than 14 days
```
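The per-index seek/scan/update counts in the report come from `sys.dm_db_index_usage_stats`; a sketch for the `Item.Items` table (illustrative; the tool's actual query may differ):

```sql
-- Index usage for one table. A LEFT JOIN is important:
-- indexes never touched since the last restart have no
-- usage row at all, which is itself a signal.
SELECT
    i.name AS index_name,
    us.user_seeks,
    us.user_scans,
    us.user_lookups,
    us.user_updates  -- high updates with zero seeks/scans suggests dead weight
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS us
    ON us.object_id = i.object_id
   AND us.index_id = i.index_id
   AND us.database_id = DB_ID()
WHERE i.object_id = OBJECT_ID('Item.Items');
```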
## 💡 Scenario 6: Quick Wins
### Find Easy Performance Improvements
**You ask Claude:**
```
I need quick wins to improve performance on 'TestDB_5100'. Show me:
- Top 3 missing indexes with highest impact
- Any unused indexes we can drop
- Statistics older than 30 days
```
**Claude will:**
```
1. get_missing_indexes with top_n=3
2. get_unused_indexes
3. get_database_statistics with days_old=30
```
**Recommendations:**
```
🎯 Quick Wins Identified:
1. CREATE INDEX [IX_Items_UserId_Status] (Impact: 450,000)
Estimated improvement: 40% faster queries
2. CREATE INDEX [IX_Meetings_DateRange] (Impact: 280,000)
Estimated improvement: 35% faster queries
3. DROP INDEX [IX_Archive_OldColumn] (Saves: 89 MB)
No usage in 90 days, receiving 15K updates/day
4. UPDATE STATISTICS on 8 tables (oldest: 45 days)
```
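The "Impact" figures above are derived from the missing-index DMVs. A common way to rank them, sketched below, multiplies cost, impact, and usage (the tool's exact formula may differ):

```sql
-- Top 3 missing-index suggestions ranked by a composite
-- improvement measure. Treat suggestions as input for review,
-- not as ready-to-run DDL.
SELECT TOP (3)
    migs.avg_total_user_cost * migs.avg_user_impact
        * (migs.user_seeks + migs.user_scans) AS improvement_measure,
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns
FROM sys.dm_db_missing_index_group_stats AS migs
JOIN sys.dm_db_missing_index_groups AS mig
    ON migs.group_handle = mig.index_group_handle
JOIN sys.dm_db_missing_index_details AS mid
    ON mig.index_handle = mid.index_handle
ORDER BY improvement_measure DESC;
```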
## 🚨 Scenario 7: Emergency Response
### Production is Down!
**You ask Claude:**
```
URGENT: Production database 'TestDB_Prod' is extremely slow right now!
Quick diagnosis:
- Show current blocking
- Top 3 running queries by CPU
- Top 5 wait types
```
**Claude will immediately:**
1. Run `get_blocking_queries` for current locks
2. Execute `get_slow_queries` with real-time filter
3. Check `get_wait_statistics` with top_n=5
**Immediate Insights:**
```
🚨 BLOCKING DETECTED:
Session 127 blocked by Session 54 (5 minutes)
Blocker Query: UPDATE Items SET Status = ... WHERE ItemId IN (...)
Recommendation: Consider killing session 54 or optimizing the update
⚡ TOP WAIT: WRITELOG (68%)
Recommendation: Transaction log disk is the bottleneck
Action: Check log disk I/O latency; consider faster log storage or reducing log-intensive activity
```
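The wait-type breakdown can be checked directly against `sys.dm_os_wait_stats`; the sketch below filters a few benign background waits (the exclusion list is illustrative, not exhaustive):

```sql
-- Top 5 waits by accumulated wait time since the last restart
-- (or since the stats were last cleared), with percent of total.
SELECT TOP (5)
    wait_type,
    wait_time_ms,
    waiting_tasks_count,
    100.0 * wait_time_ms / SUM(wait_time_ms) OVER () AS pct_of_total
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'SQLTRACE_BUFFER_FLUSH', N'XE_TIMER_EVENT')
ORDER BY wait_time_ms DESC;
```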
## 📊 Scenario 8: Capacity Planning
### Understanding Database Growth
**You ask Claude:**
```
I need capacity planning data for database 'TestDB_5100':
- Show me the top 20 largest tables
- Include index sizes
- Calculate total database size
```
**Claude will:**
1. Run `get_table_sizes` with top_n=20
2. Include breakdown of data vs index space
3. Provide growth trends if available
**Report:**
```
Database: TestDB_5100
Total Size: 458 GB
Top Space Consumers:
1. Item.Items - 125 GB (Data: 85 GB, Indexes: 40 GB)
2. Meeting.Meetings - 89 GB (Data: 62 GB, Indexes: 27 GB)
3. Document.Documents - 67 GB (Data: 45 GB, Indexes: 22 GB)
...
Growth Rate: ~2 GB/week
Projected Size (6 months): 510 GB
```
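Table and index sizes like those above can be computed from `sys.dm_db_partition_stats`; here is a sketch (illustrative; the tool's actual query may differ):

```sql
-- Top 20 tables by reserved space. Pages are 8 KB, hence
-- the * 8 / 1024 conversion to MB. Row counts come from the
-- heap or clustered index partitions only (index_id 0 or 1).
SELECT TOP (20)
    s.name + '.' + t.name AS table_name,
    SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS total_rows,
    SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables  AS t ON ps.object_id = t.object_id
JOIN sys.schemas AS s ON t.schema_id = s.schema_id
GROUP BY s.name, t.name
ORDER BY reserved_mb DESC;
```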
## 🔄 Scenario 9: Regular Maintenance
### Monthly Maintenance Routine
**You ask Claude:**
```
Run our monthly maintenance routine for database 'TestDB_5100':
1. Generate index maintenance script (reorganize >10%, rebuild >30%)
2. List statistics that need updating (>14 days)
3. Identify unused indexes for review
4. Full health check report
```
**Claude creates a comprehensive maintenance plan:**
```
📋 Monthly Maintenance Plan - TestDB_5100
Generated: 2025-01-15
🔧 Index Maintenance (42 indexes need attention)
- 18 indexes to REORGANIZE
- 24 indexes to REBUILD
- Estimated downtime: 2-3 hours (if OFFLINE)
- Recommended: Run during maintenance window
📊 Statistics Updates (12 tables)
- 8 tables with stats >30 days old
- 4 tables with >20% modifications
- Estimated time: 15-20 minutes
🗑️ Unused Indexes for Review (5 indexes)
- Total space to reclaim: 234 MB
- Consider business logic before dropping
✅ Health Score: 78/100
- CPU: Good
- I/O: Needs attention (PAGEIOLATCH waits)
- Memory: Good
- Blocking: Minimal
```
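The stale-statistics portion of the plan can be verified with `sys.dm_db_stats_properties`; a sketch using a 14-day cutoff (illustrative; the tool's thresholds may differ):

```sql
-- User-table statistics not updated in the last 14 days,
-- with how many modifications have accumulated since.
SELECT
    OBJECT_NAME(s.object_id) AS table_name,
    s.name                   AS stats_name,
    sp.last_updated,
    sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
  AND sp.last_updated < DATEADD(DAY, -14, SYSUTCDATETIME())
ORDER BY sp.last_updated;
```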
## 💬 Natural Language Examples
### You can ask in various ways:
```
"What's slowing down my database?"
"Show me the worst performing queries"
"Are there any blocking issues right now?"
"Generate a maintenance plan for next Sunday"
"Why is this query slow: SELECT * FROM Items WHERE Status = 'Active'"
"Compare performance between today and last week"
"What indexes should I add to improve performance?"
"Which indexes are wasting space?"
"Is my transaction log causing problems?"
"Show me the biggest tables"
```
## 🎓 Pro Tips
### 1. Start Broad, Then Narrow
```
First: "Run a health check on TestDB_5100"
Then: "Focus on the Items table - show detailed analysis"
Finally: "Generate optimization scripts for Items table"
```
### 2. Use Time Ranges
```
"Show queries from Query Store for the last 2 hours"
"Find statistics updated more than 60 days ago"
"What was blocking yesterday at 3 PM?" (if you have historical data)
```
### 3. Combine Multiple Checks
```
"For database X: check health, find slow queries, generate maintenance script"
```
### 4. Ask for Explanations
```
"What does PAGEIOLATCH wait type mean?"
"Why is this index unused even though the table has queries?"
"Explain the difference between REORGANIZE and REBUILD"
```
### 5. Request Different Formats
```
"Give me the top 10 slow queries as a table"
"Format the wait statistics as a summary"
"Show me just the CREATE INDEX statements"
```
## 🔐 Security Note
**Never commit credentials to config files, and never paste passwords into the conversation.** Reference the server and user by name and let the tooling prompt for secrets:
```
✅ Good:
"Check slow queries on server 'sql-prod' database 'MyDB' with user 'monitor_user'"
(Claude will ask for password)
❌ Bad:
"Use password 'SuperSecret123' to connect..."
```
## 📈 Measuring Success
### Before Optimization
```
Average query duration: 2,500ms
CPU usage: 85%
Wait time (PAGEIOLATCH): 45%
Blocking chains: 5-10 daily
```
### After Using MCP Tools + Implementing Recommendations
```
Average query duration: 450ms (82% improvement)
CPU usage: 35%
Wait time (PAGEIOLATCH): 8%
Blocking chains: 0-1 daily
```
**Remember**: This MCP server is a diagnostic tool. Always test recommendations in non-production environments first!