# Preset Customization Guide
**Purpose**: Learn how to create custom presets or modify existing ones for your tech stack.
**Audience**: Developers customizing claude-hooks for specific technologies.
## ⚠️ Known Issue: Claude API Burst Limits
**Symptom**: Rate limit errors during git commits, even though manual `claude` CLI commands work fine.
**Why**: Git hooks fire multiple Claude API requests simultaneously (parallel execution), triggering burst limits:
- **Hourly limit**: ~100 requests/hour (manual CLI usage rarely exceeds this)
- **Burst limit**: ~5 requests per 10 seconds (parallel git hooks can exceed this)
**Example**:
```bash
# Manual CLI (works fine - sequential with natural delays)
claude --prompt "test1" # Request 1
# ... 30 seconds later ...
claude --prompt "test2" # Request 2
# Git hooks (hits burst limit - simultaneous)
git commit -m "auto"
→ Pre-commit: 2 parallel processes ⚡⚡ (instant)
→ Prepare-commit-msg: 1 process ⚡ (3 seconds later)
→ Total: 3 requests in 3 seconds = RATE_LIMIT
```
**Solution**: Disable parallel execution in `.claude/config.json`:
```json
{
  "subagents": {
    "enabled": false
  }
}
```
**Effect**: Requests become sequential → No burst limit issues (slower but reliable).
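For intuition, here is a minimal sketch of the sequential behavior (the `analyzeSequentially` and `analyzeBatch` helpers and the spacing constant are illustrative, not part of claude-hooks):

```javascript
// Hypothetical sketch: with subagents disabled, batches run one at a
// time, and a small delay keeps the request rate well under the
// ~5 requests / 10 seconds burst limit.
const BURST_SPACING_MS = 2500; // ~4 requests per 10 seconds

async function analyzeSequentially(batches, analyzeBatch) {
  const results = [];
  for (const batch of batches) {
    results.push(await analyzeBatch(batch)); // at most one request in flight
    await new Promise((resolve) => setTimeout(resolve, BURST_SPACING_MS));
  }
  return results;
}
```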
## Table of Contents
1. [Understanding the Preset System](#understanding-the-preset-system)
2. [How Analysis Prompts Are Built](#how-analysis-prompts-are-built)
3. [Creating a New Preset](#creating-a-new-preset)
4. [Modifying Existing Presets](#modifying-existing-presets)
5. [Placeholder System](#placeholder-system)
6. [Preset Kickstart Prompt](#preset-kickstart-prompt)
7. [Best Practices](#best-practices)
8. [Additional Resources](#additional-resources)
## Understanding the Preset System
### What is a Preset?
A preset is a **self-contained package** that customizes claude-hooks for a specific technology stack. It includes:
- **Metadata** - Tech stack, file extensions, focus areas
- **Configuration** - Analysis settings, timeouts, parallel processing
- **Templates** - Custom prompts and guidelines for Claude
### Directory Structure
```
.claude/presets/{preset-name}/
├── preset.json # Metadata (tech stack, file filters, focus areas)
├── config.json # Configuration overrides (optional)
├── ANALYSIS_PROMPT.md # Custom analysis prompt template
├── PRE_COMMIT_GUIDELINES.md # Evaluation criteria for Claude
└── (other templates) # Can reference ../shared/ templates
```
### Preset Components
| File | Required | Purpose | Example |
| -------------------------- | -------- | -------------------------------------- | ---------------------------------------- |
| `preset.json` | **Yes** | Metadata, file extensions, focus areas | Tech stack list, `.java` files |
| `config.json` | No | Override analysis settings | Timeout, model selection |
| `ANALYSIS_PROMPT.md` | **Yes** | Main prompt template for Claude | "You are analyzing a backend project..." |
| `PRE_COMMIT_GUIDELINES.md` | **Yes** | Evaluation criteria | "Check for SQL injection..." |
**Shared templates**: Use `../shared/TEMPLATE_NAME.md` to reuse common templates (commit messages, PR analysis, etc.)
## How Analysis Prompts Are Built
### Visual Flow
```
1. USER SELECTS PRESET
   claude-hooks --set-preset backend
        │
        ▼
2. PRESET LOADED FROM DISK
   .claude/presets/backend/
   ├── preset.json        → metadata
   ├── config.json        → configuration
   └── ANALYSIS_PROMPT.md → template
        │
        ▼
3. CONFIGURATION MERGED
   defaults < user config < preset config
   - File extensions: ['.java', '.xml', '.yml']
   - Max file size: 1MB
   - Parallel analysis: enabled
        │
        ▼
4. STAGED FILES FILTERED
   git diff --cached --name-only
   - Filter by preset.fileExtensions
   - Filter by config.analysis.maxFileSize
   - Limit to config.analysis.maxFiles
   Result: [UserController.java, UserService.java]
        │
        ▼
5. TEMPLATES LOADED & PLACEHOLDERS REPLACED
   Load: ANALYSIS_PROMPT.md
   Replace:
     {{PRESET_NAME}}     → "backend"
     {{TECH_STACK}}      → "Spring Boot, JPA, SQL Server"
     {{FILE_EXTENSIONS}} → ".java, .xml, .yml"
     {{FOCUS_AREAS}}     → "REST APIs, JPA, Security..."
        │
        ▼
6. PROMPT CONSTRUCTED
   [ANALYSIS_PROMPT.md]
   + [PRE_COMMIT_GUIDELINES.md]
   + [SUBAGENT_INSTRUCTION.md] (if parallel enabled)
   + [File diffs section]
   Result: Complete prompt ready for Claude
        │
        ▼
7. SENT TO CLAUDE
   claude --prompt "..." --model sonnet
   - Single analysis (1-2 files)
   - OR parallel analysis (3+ files, batched)
        │
        ▼
8. RESPONSE PROCESSED
   - JSON format with structured results
   - Quality gate: PASSED/FAILED
   - Issues by severity: blocker, critical, major, minor
   - Commit proceeds or is blocked
```
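To make step 4 concrete, here is a minimal sketch of the filtering, assuming the merged config and loaded preset from steps 2-3 (illustrative only, not the actual claude-hooks source):

```javascript
const { execSync } = require('child_process');
const { statSync } = require('fs');

// Sketch of step 4: filter staged files by the preset's extensions
// and the configured size/count limits.
function getFilesToAnalyze(preset, config) {
  const staged = execSync('git diff --cached --name-only', { encoding: 'utf8' })
    .split('\n')
    .filter(Boolean);

  return staged
    .filter((file) => preset.fileExtensions.some((ext) => file.endsWith(ext)))
    .filter((file) => {
      try {
        return statSync(file).size <= config.analysis.maxFileSize;
      } catch {
        return false; // e.g. files deleted in this commit
      }
    })
    .slice(0, config.analysis.maxFiles);
}
```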
### Prompt Assembly Example
**Input files:**
- `ANALYSIS_PROMPT.md` (50 lines) - Tech-stack-specific context
- `PRE_COMMIT_GUIDELINES.md` (100 lines) - Evaluation criteria
- File diffs (variable) - Actual code changes
**Output prompt:**
```markdown
You are analyzing a backend project...
Tech Stack: Spring Boot, JPA, SQL Server
Focus Areas: REST APIs, Security, Transaction management
=== EVALUATION GUIDELINES ===
1. Security First: Check for OWASP Top 10...
2. Spring Boot Best Practices...
=== CHANGES TO REVIEW ===
--- File: src/main/java/UserController.java ---
@@ -10,5 +10,8 @@
-    return userService.findById(id);
+public UserDTO getUser(@PathVariable Long id) {
+    return userService.findById(id);
+}
```
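A minimal sketch of the concatenation behind this output, assuming the templates have already been loaded as strings (illustrative; the real assembly lives in `lib/utils/prompt-builder.js`):

```javascript
// Sections are joined in the order shown in step 6 of the flow above;
// the subagent instruction is only present when parallel analysis is on.
function buildPrompt({ analysisPrompt, guidelines, subagentInstruction, diffs }) {
  return [
    analysisPrompt,
    '=== EVALUATION GUIDELINES ===',
    guidelines,
    subagentInstruction,
    '=== CHANGES TO REVIEW ===',
    diffs,
  ]
    .filter(Boolean) // drop missing sections
    .join('\n\n');
}
```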
## Creating a New Preset
### Step 1: Choose Preset Directory Name
Convention: `{tech-stack}` (lowercase, hyphenated)
Examples:
- `python-django`
- `go-microservices`
- `mobile-react-native`
```bash
cd templates/presets
mkdir python-django
cd python-django
```
### Step 2: Create `preset.json`
**Purpose**: Metadata that defines what your preset analyzes and focuses on.
**Why each field matters**:
- `fileExtensions` → Filters which files are analyzed (performance + relevance)
- `focusAreas` → Directs Claude's attention to tech-stack-specific concerns
- `techStack` → Provides context to Claude about the project
**Template**:
```json
{
  "name": "python-django",
  "displayName": "Python Django",
  "description": "Python web applications with Django, PostgreSQL, Celery",
  "version": "1.0.0",
  "techStack": [
    "Python 3.9+",
    "Django 4.x",
    "Django REST Framework",
    "PostgreSQL",
    "Celery",
    "Redis",
    "pytest"
  ],
  "fileExtensions": [".py", ".txt", ".yml", ".yaml"],
  "focusAreas": [
    "Django ORM query optimization (N+1 queries)",
    "Security vulnerabilities (OWASP Top 10)",
    "SQL injection prevention in raw queries",
    "Proper use of Django middleware",
    "Celery task design and error handling",
    "API serializer validation",
    "Test coverage and pytest best practices"
  ],
  "templates": {
    "analysis": "ANALYSIS_PROMPT.md",
    "guidelines": "PRE_COMMIT_GUIDELINES.md",
    "commitMessage": "../shared/COMMIT_MESSAGE.md",
    "analyzeDiff": "../shared/ANALYZE_DIFF.md",
    "resolution": "../shared/RESOLUTION_PROMPT.md"
  }
}
```
**Field reference**:
- `name` - Internal identifier (must match directory name)
- `displayName` - Human-readable name shown in UI
- `description` - Brief tech stack summary
- `version` - Semantic version for tracking changes
- `techStack` - Array of technologies (injected as `{{TECH_STACK}}`)
- `fileExtensions` - Files to analyze (filters `git diff` output)
- `focusAreas` - Array of concerns (injected as `{{FOCUS_AREAS}}`)
- `templates` - Relative paths to template files
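For intuition, a hypothetical loader enforcing these rules might look like the sketch below (the real loading logic lives in `lib/utils/preset-loader.js`; this is not its actual code):

```javascript
const fs = require('fs');
const path = require('path');

// Sketch: read preset.json and check the fields this guide calls required.
function loadPreset(presetDir) {
  const raw = fs.readFileSync(path.join(presetDir, 'preset.json'), 'utf8');
  const preset = JSON.parse(raw);

  for (const field of ['name', 'fileExtensions', 'focusAreas', 'templates']) {
    if (!preset[field]) {
      throw new Error(`preset.json is missing required field "${field}"`);
    }
  }
  if (preset.name !== path.basename(presetDir)) {
    throw new Error(`"name" (${preset.name}) must match the directory name`);
  }
  return preset;
}
```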
### Step 3: Create `config.json` (Optional)
**Purpose**: Override default analysis settings for this tech stack.
**Why**: Different tech stacks have different needs:
- Django projects → Larger models, need more `maxFileSize`
- Microservices → Many small files, increase `maxFiles`
- Complex analysis → Increase `timeout`
**Template**:
```json
{
  "analysis": {
    "maxFileSize": 1000000,
    "maxFiles": 12,
    "timeout": 180000
  },
  "subagents": {
    "enabled": true,
    "model": "sonnet",
    "batchSize": 3
  }
}
```
**When to override**:
- `maxFileSize` - Increase if files are typically large (models with many fields)
- `maxFiles` - Adjust based on typical commit size
- `timeout` - Increase for complex analysis (fullstack presets)
- `model` - Use `sonnet`/`opus` for critical stacks, `haiku` for speed
**Default values** (from `lib/config.js`):
```javascript
{
  analysis: {
    maxFileSize: 1000000, // 1 MB
    maxFiles: 30,
    timeout: 180000       // 3 minutes
  },
  subagents: {
    enabled: true,
    model: 'haiku',
    batchSize: 3
  }
}
```
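A minimal sketch of that merge order, assuming a shallow per-section merge in which the preset config wins (see `lib/config.js` for the real merge logic, which may differ in detail):

```javascript
// defaults < user config < preset config: rightmost spread wins per key.
// Spreading an undefined section is a harmless no-op in object literals.
function mergeConfig(defaults, userConfig = {}, presetConfig = {}) {
  return {
    analysis: {
      ...defaults.analysis,
      ...userConfig.analysis,
      ...presetConfig.analysis,
    },
    subagents: {
      ...defaults.subagents,
      ...userConfig.subagents,
      ...presetConfig.subagents,
    },
  };
}
```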
### Step 4: Create `ANALYSIS_PROMPT.md`
**Purpose**: Tech-stack-specific instructions prepended to the analysis prompt.
**Why**: Gives Claude context about the project before showing code.
**Template**:
```markdown
You are analyzing a **{{PRESET_NAME}}** project with the following technology stack:
**Tech Stack:** {{TECH_STACK}}
**Analyzing files matching:** {{FILE_EXTENSIONS}}
## Your Task
Perform a comprehensive code quality analysis focusing on these areas:
{{FOCUS_AREAS}}
## Analysis Guidelines
1. **Security First**: Check for OWASP Top 10 vulnerabilities, especially:
   - SQL injection in raw queries
   - XSS in template rendering
   - CSRF token validation
   - Insecure authentication

2. **Django Best Practices**:
   - Proper use of Django ORM (avoid N+1 queries)
   - Correct exception handling
   - Appropriate use of serializers vs models
   - Proper middleware usage

3. **Performance**:
   - Database query optimization
   - select_related() and prefetch_related()
   - Caching strategies
   - Celery task efficiency

4. **Code Quality**:
   - PEP 8 compliance
   - DRY violations
   - Proper error handling
   - Test coverage
```
**Customization tips**:
- **Section 1**: Always start with security (highest priority)
- **Section 2**: Framework-specific patterns and anti-patterns
- **Section 3**: Performance concerns for this stack
- **Section 4**: General code quality (reusable across stacks)
### Step 5: Create `PRE_COMMIT_GUIDELINES.md`
**Purpose**: Detailed evaluation criteria for Claude.
**Why**: Provides concrete examples of what to look for and how to report issues.
**Template**:
````markdown
# Code Quality Evaluation Criteria
## Severity Levels
- **BLOCKER**: Security vulnerabilities, data loss risks
- **CRITICAL**: Major bugs, performance killers
- **MAJOR**: Code smells, maintainability issues
- **MINOR**: Style violations, minor improvements
- **INFO**: Suggestions and best practices
## Django-Specific Rules
### BLOCKER Issues
1. **SQL Injection in Raw Queries**
   - Pattern: `.raw()` or `.extra()` with string formatting
   - Example: `User.objects.raw(f"SELECT * FROM users WHERE id = {user_id}")`
   - Why blocker: Direct security vulnerability

2. **Missing CSRF Protection**
   - Pattern: `@csrf_exempt` without justification
   - Why blocker: Opens CSRF attack vectors

### CRITICAL Issues

1. **N+1 Query Problem**
   - Pattern: Accessing related objects in a loop without `select_related()`
   - Example:
     ```python
     users = User.objects.all()
     for user in users:
         print(user.profile.bio)  # N+1 query
     ```
   - Why critical: Performance degradation at scale

2. **Unhandled Exceptions in Views**
   - Pattern: No try/except in view functions
   - Why critical: Exposes stack traces to users

### MAJOR Issues

1. **Missing Migrations**
   - Pattern: Model changes without corresponding migration files
   - Why major: Deployment will fail

2. **Hardcoded Secrets**
   - Pattern: API keys, passwords in code
   - Example: `API_KEY = "sk_live_..."`
   - Why major: Security risk if committed
## Output Format
Always return JSON in this exact structure:
```json
{
  "QUALITY_GATE": "PASSED or FAILED",
  "metrics": {
    "reliability": "A-E",
    "security": "A-E",
    "maintainability": "A-E"
  },
  "issuesSummary": {
    "blocker": 0,
    "critical": 0,
    "major": 2,
    "minor": 1,
    "info": 3
  },
  "blockingIssues": []
}
```
````
**Customization tips**:
- Start with your framework's most common security issues
- Include real code examples (helps Claude recognize patterns)
- Explain "why" for each severity level (helps calibration)
- Reference official documentation when possible
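The `QUALITY_GATE` field is what ultimately allows or blocks the commit. For context, here is a sketch of how a pre-commit hook could consume that JSON; the parsing and exit handling are illustrative, not claude-hooks' actual code:

```javascript
// Sketch: parse Claude's structured response and block the commit
// by exiting non-zero when the quality gate fails.
function applyQualityGate(responseText) {
  const result = JSON.parse(responseText);

  if (result.QUALITY_GATE === 'FAILED') {
    const { blocker, critical } = result.issuesSummary;
    console.error(
      `Commit blocked: ${blocker} blocker(s), ${critical} critical issue(s)`
    );
    process.exit(1); // a non-zero exit makes git abort the commit
  }
}
```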
### Step 6: Test Your Preset
```bash
# 1. Install preset (from repo root)
cd ../../.. # Back to repo root
claude-hooks install --force
# 2. Select your new preset
claude-hooks --set-preset python-django
# 3. Test on a sample commit
git add .
git commit -m "test: verify preset works"
# 4. Review analysis output
# Look for:
# - Correct file filtering
# - Tech-stack-specific focus areas mentioned
# - Appropriate severity levels
```
## Modifying Existing Presets
### User-Level Customization
**Location**: `.claude/` (in your repository, not in `templates/presets/`)
**Why**: Project-specific overrides without modifying the preset itself.
**Steps**:
```bash
# 1. Copy preset to your repo's .claude directory
cp templates/presets/backend/ANALYSIS_PROMPT.md .claude/
# 2. Edit .claude/ANALYSIS_PROMPT.md
vim .claude/ANALYSIS_PROMPT.md
# 3. Commit continues to work
git commit -m "test"
# Now uses YOUR custom prompt instead of preset's
```
**Precedence**:
```
.claude/ANALYSIS_PROMPT.md                      (highest priority)
        ↓
templates/presets/backend/ANALYSIS_PROMPT.md
        ↓
templates/shared/ANALYSIS_PROMPT.md             (fallback)
```
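A minimal sketch of that lookup chain, assuming templates are resolved from the repo root and the first match wins (illustrative; see `lib/utils/preset-loader.js` for the real resolution):

```javascript
const fs = require('fs');
const path = require('path');

// Sketch: try user override, then the active preset, then shared fallback.
function resolveTemplate(fileName, presetName) {
  const candidates = [
    path.join('.claude', fileName),                          // user override
    path.join('templates', 'presets', presetName, fileName), // preset
    path.join('templates', 'shared', fileName),              // fallback
  ];
  const match = candidates.find((candidate) => fs.existsSync(candidate));
  if (!match) throw new Error(`No template found for ${fileName}`);
  return fs.readFileSync(match, 'utf8');
}
```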
## Placeholder System
### Available Placeholders
| Placeholder | Source | Example Value | When Replaced |
| --------------------- | ------------------------------------- | ------------------------------------- | -------------------------- |
| `{{PRESET_NAME}}` | `preset.json` → `name` | `"backend"` | Template load |
| `{{TECH_STACK}}` | `preset.json` → `techStack` | `"Spring Boot, JPA, SQL Server"` | Template load |
| `{{FILE_EXTENSIONS}}` | `preset.json` → `fileExtensions` | `".java, .xml, .yml"` | Template load |
| `{{FOCUS_AREAS}}` | `preset.json` → `focusAreas` | `"REST APIs, Security..."` (bulleted) | Template load |
| `{{REPO_NAME}}` | Git metadata | `"my-project"` | Prompt build |
| `{{BRANCH_NAME}}` | Git metadata | `"feature/new-endpoint"` | Prompt build |
| `{{BATCH_SIZE}}` | `config.json` → `subagents.batchSize` | `"3"` | Prompt build (if parallel) |
| `{{MODEL}}` | `config.json` → `subagents.model` | `"haiku"` | Prompt build (if parallel) |
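A minimal sketch of the substitution, assuming arrays are joined before injection as the table shows (illustrative; the real replacement happens in `lib/utils/prompt-builder.js`):

```javascript
// Sketch: replace {{PLACEHOLDER}} tokens from preset.json values.
// focusAreas is injected as a bulleted list, per the table above.
function fillPlaceholders(template, preset) {
  const values = {
    PRESET_NAME: preset.name,
    TECH_STACK: preset.techStack.join(', '),
    FILE_EXTENSIONS: preset.fileExtensions.join(', '),
    FOCUS_AREAS: preset.focusAreas.map((area) => `- ${area}`).join('\n'),
  };
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in values ? values[key] : match // leave unknown placeholders intact
  );
}
```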
## Preset Kickstart Prompt
Use this prompt with Claude (in the same directory) to generate a new preset:
### 🤖 PRESET GENERATION PROMPT
```markdown
I need to create a new claude-hooks preset for the following technology stack:
**Tech Stack**: [e.g., Python Django, Go microservices, React Native]
**Primary Technologies**:
- [List 5-10 main technologies]
- [e.g., Django 4.x, PostgreSQL, Celery, Redis, pytest]
**File Extensions to Analyze**:
- [e.g., .py, .yml, .yaml, requirements.txt]
**Key Concerns for This Stack**:
- [e.g., Django ORM N+1 queries]
- [e.g., SQL injection in raw queries]
- [e.g., Celery task error handling]
**Common Vulnerabilities**:
- [e.g., XSS in Django templates]
- [e.g., CSRF protection]
**Performance Gotchas**:
- [e.g., Missing select_related(), database query count]
Please generate the following files:
1. **preset.json** - Include name, displayName, description, version, techStack, fileExtensions, focusAreas, and templates
2. **config.json** - Suggest appropriate maxFileSize, maxFiles, timeout, and model based on typical file sizes and analysis complexity for this stack
3. **ANALYSIS_PROMPT.md** - Create a tech-stack-specific analysis prompt that:
- Uses placeholders: {{PRESET_NAME}}, {{TECH_STACK}}, {{FILE_EXTENSIONS}}, {{FOCUS_AREAS}}
- Includes 4 sections: Security First, Framework Best Practices, Performance, Code Quality
- Provides concrete examples of what to look for
4. **PRE_COMMIT_GUIDELINES.md** - Create detailed evaluation criteria with:
- Severity levels (BLOCKER, CRITICAL, MAJOR, MINOR, INFO)
- Tech-stack-specific rules with code examples
- "Why" explanations for each severity level
- JSON output format specification
**Context**:
- This preset will be installed at: `templates/presets/{preset-name}/`
- It should follow the same structure as existing presets in `templates/presets/backend/`
- Users will select it with: `claude-hooks --set-preset {preset-name}`
- The preset must output JSON with structured analysis results
**Format**: Return each file's content as a separate code block with clear file headers.
```
### Usage Example
```bash
# 1. Navigate to presets directory
cd templates/presets
# 2. Create new preset directory
mkdir python-django
# 3. Open Claude (in same directory context)
# 4. Paste the kickstart prompt above (filled with your tech stack)
# 5. Copy generated files into python-django/
# 6. Test and iterate
# 7. Install and test
cd ../../..
claude-hooks install --force
claude-hooks --set-preset python-django
```
## Best Practices
### ✅ Do
- **Include examples** - Real code snippets help Claude recognize patterns
- **Explain "why"** - Severity justifications improve analysis consistency
- **Reuse shared templates** - Use `../shared/` for common templates (commit messages, etc.)
### ❌ Don't
- **Over-specify** - Too many rules overwhelm Claude and cause false positives
- **Skip security** - Always make security the #1 priority in guidelines
- **Forget fallbacks** - Provide generic rules for edge cases
## Additional Resources
- **Example presets**: `templates/presets/backend/`, `templates/presets/frontend/`
- **Utility functions**: `lib/utils/preset-loader.js` - See how templates are loaded
- **Configuration reference**: `lib/config.js` - Default values and merge logic
- **Prompt building**: `lib/utils/prompt-builder.js` - How prompts are assembled
**Questions?** Start with the example presets above, then dig into the `lib/` modules listed under Additional Resources.