backend-mcp
Version:
Automatic backend generator with Node.js, Express, Prisma, and configurable modules. npx-compatible MCP server for AI agents. Supports PostgreSQL, MySQL, MongoDB, and SQLite.
# 📦 logging Module
**Version:** 1.0.0
**Category:** monitoring
**Description:** Complete structured logging system with Winston and audit support
## 📊 Module Status
| Component | Status |
|-----------|--------|
| Initialization script | ✅ Available |
| Templates | ✅ Available |
| Examples | ❌ Missing |
## 🔗 Dependencies
### Required
- `database`
### Optional
- `auth`
- `email`
## 🤖 AI Triggers
This module is activated automatically when the following conditions are detected:
- **user_wants_logging**: true
- **needs_monitoring**: true
- **requires_audit**: true
- **has_debugging**: true
## ✨ Features
- structured-logging
- multiple-transports
- log-rotation
- audit-trail
- error-tracking
- performance-monitoring
- request-logging
- database-logging
- file-logging
- console-logging
- log-filtering
- log-formatting
## 📖 Full Documentation
# Logging Module
Comprehensive logging and audit trail module for MCP Backend framework.
## Features
- 📝 Structured logging with multiple levels
- 📊 Request/response logging middleware
- 🔍 Audit trail and activity tracking
- 📁 Multiple output formats (JSON, text, custom)
- 🔄 Log rotation and archiving
- 🚀 High-performance async logging
- 🎯 Contextual logging with correlation IDs
- 📈 Performance metrics logging
- 🔒 Security event logging
- 🌐 Distributed tracing support
## Installation
This module is automatically installed when using the MCP Backend Generator.
## Configuration
### Environment Variables
**General Configuration:**
- `LOG_LEVEL` (optional) - Logging level (debug, info, warn, error) (default: info)
- `LOG_FORMAT` (optional) - Log format (json, text, custom) (default: json)
- `LOG_TIMESTAMP` (optional) - Include timestamps (default: true)
- `LOG_COLORIZE` (optional) - Colorize console output (default: true)
**File Logging:**
- `LOG_FILE_ENABLED` (optional) - Enable file logging (default: true)
- `LOG_FILE_PATH` (optional) - Log file path (default: ./logs)
- `LOG_FILE_MAX_SIZE` (optional) - Maximum log file size (default: 10m)
- `LOG_FILE_MAX_FILES` (optional) - Maximum number of log files (default: 5)
- `LOG_FILE_DATE_PATTERN` (optional) - Date pattern for rotation (default: YYYY-MM-DD)
**Database Logging:**
- `LOG_DB_ENABLED` (optional) - Enable database logging (default: false)
- `LOG_DB_TABLE` (optional) - Database table name (default: logs)
- `LOG_DB_BATCH_SIZE` (optional) - Batch size for database writes (default: 100)
- `LOG_DB_FLUSH_INTERVAL` (optional) - Flush interval in ms (default: 5000)
**External Services:**
- `LOG_ELASTICSEARCH_ENABLED` (optional) - Enable Elasticsearch logging (default: false)
- `LOG_ELASTICSEARCH_URL` (optional) - Elasticsearch URL
- `LOG_ELASTICSEARCH_INDEX` (optional) - Elasticsearch index pattern (default: logs-YYYY.MM.DD)
- `LOG_SENTRY_ENABLED` (optional) - Enable Sentry error logging (default: false)
- `LOG_SENTRY_DSN` (optional) - Sentry DSN
- `LOG_SLACK_ENABLED` (optional) - Enable Slack notifications (default: false)
- `LOG_SLACK_WEBHOOK_URL` (optional) - Slack webhook URL
**Performance:**
- `LOG_ASYNC_ENABLED` (optional) - Enable async logging (default: true)
- `LOG_BUFFER_SIZE` (optional) - Log buffer size (default: 1000)
- `LOG_FLUSH_INTERVAL` (optional) - Buffer flush interval in ms (default: 1000)
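As an illustration, a minimal `.env` fragment combining these variables might look like the following (the values shown are the documented defaults, not requirements):

```
# Illustrative .env fragment using the documented defaults
LOG_LEVEL=info
LOG_FORMAT=json
LOG_FILE_ENABLED=true
LOG_FILE_PATH=./logs
LOG_FILE_MAX_SIZE=10m
LOG_FILE_MAX_FILES=5
LOG_DB_ENABLED=false
LOG_ASYNC_ENABLED=true
LOG_BUFFER_SIZE=1000
LOG_FLUSH_INTERVAL=1000
```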
### Configuration File
```typescript
// src/config/logging.ts
export const loggingConfig = {
  level: process.env.LOG_LEVEL || 'info',
  format: process.env.LOG_FORMAT || 'json',
  timestamp: process.env.LOG_TIMESTAMP !== 'false',
  colorize: process.env.LOG_COLORIZE !== 'false',
  file: {
    enabled: process.env.LOG_FILE_ENABLED !== 'false',
    path: process.env.LOG_FILE_PATH || './logs',
    maxSize: process.env.LOG_FILE_MAX_SIZE || '10m',
    maxFiles: parseInt(process.env.LOG_FILE_MAX_FILES || '5', 10),
    datePattern: process.env.LOG_FILE_DATE_PATTERN || 'YYYY-MM-DD'
  },
  database: {
    enabled: process.env.LOG_DB_ENABLED === 'true',
    table: process.env.LOG_DB_TABLE || 'logs',
    batchSize: parseInt(process.env.LOG_DB_BATCH_SIZE || '100', 10),
    flushInterval: parseInt(process.env.LOG_DB_FLUSH_INTERVAL || '5000', 10)
  },
  external: {
    elasticsearch: {
      enabled: process.env.LOG_ELASTICSEARCH_ENABLED === 'true',
      url: process.env.LOG_ELASTICSEARCH_URL,
      index: process.env.LOG_ELASTICSEARCH_INDEX || 'logs-YYYY.MM.DD'
    },
    sentry: {
      enabled: process.env.LOG_SENTRY_ENABLED === 'true',
      dsn: process.env.LOG_SENTRY_DSN
    },
    slack: {
      enabled: process.env.LOG_SLACK_ENABLED === 'true',
      webhookUrl: process.env.LOG_SLACK_WEBHOOK_URL
    }
  },
  performance: {
    asyncEnabled: process.env.LOG_ASYNC_ENABLED !== 'false',
    bufferSize: parseInt(process.env.LOG_BUFFER_SIZE || '1000', 10),
    flushInterval: parseInt(process.env.LOG_FLUSH_INTERVAL || '1000', 10)
  }
};
```
## Usage
### Basic Logging
```typescript
import { logger } from './services/logging';
// Basic logging levels
logger.debug('Debug information', { userId: 123, action: 'debug' });
logger.info('Information message', { event: 'user_login', userId: 123 });
logger.warn('Warning message', { event: 'rate_limit_approaching', userId: 123 });
logger.error('Error occurred', { error: new Error('Something went wrong'), userId: 123 });

// Structured logging with metadata
logger.info('User action performed', {
  userId: 123,
  action: 'create_post',
  postId: 456,
  timestamp: new Date().toISOString(),
  ip: '192.168.1.1',
  userAgent: 'Mozilla/5.0...'
});

// Performance logging
const startTime = Date.now();
// ... some operation
const duration = Date.now() - startTime;
logger.info('Operation completed', {
  operation: 'database_query',
  duration,
  query: 'SELECT * FROM users',
  resultCount: 150
});

// Error logging with stack trace
try {
  // Some risky operation
  throw new Error('Database connection failed');
} catch (error) {
  logger.error('Database operation failed', {
    error: error.message,
    stack: error.stack,
    operation: 'user_fetch',
    userId: 123
  });
}
```
### Express Middleware
```typescript
import { loggingMiddleware } from './middleware/logging';
import express from 'express';
const app = express();

// Request/response logging middleware
app.use(loggingMiddleware.requests({
  includeBody: true,
  includeHeaders: true,
  excludePaths: ['/health', '/metrics'],
  sanitizeFields: ['password', 'token', 'authorization']
}));

// Error logging middleware
app.use(loggingMiddleware.errors());

// Custom middleware for specific routes
app.use('/api/admin', loggingMiddleware.audit({
  logLevel: 'warn',
  includeUser: true,
  includePermissions: true
}));

// Example route with contextual logging
app.post('/api/users', async (req, res) => {
  const correlationId = req.headers['x-correlation-id'] || generateId();

  // Create child logger with context
  const contextLogger = logger.child({
    correlationId,
    userId: req.user?.id,
    endpoint: '/api/users',
    method: 'POST'
  });

  contextLogger.info('Creating new user', { email: req.body.email });

  try {
    const user = await userService.create(req.body);
    contextLogger.info('User created successfully', {
      userId: user.id,
      email: user.email
    });
    res.json(user);
  } catch (error) {
    contextLogger.error('User creation failed', {
      error: error.message,
      email: req.body.email
    });
    res.status(500).json({ error: 'Internal server error' });
  }
});
```
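The module lists `cls-hooked` for continuation-local storage; the same correlation-ID propagation can be sketched with Node's built-in `AsyncLocalStorage`, which makes the ID readable anywhere in the request's async call chain without threading it through arguments. The function names below are illustrative, not the module's API:

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

// Holds per-request context for the lifetime of the async call chain.
const requestContext = new AsyncLocalStorage<{ correlationId: string }>();

// Read the correlation ID anywhere downstream (e.g. inside a service or logger).
function currentCorrelationId(): string | undefined {
  return requestContext.getStore()?.correlationId;
}

// Express-middleware-shaped entry point: enter the context, then call next().
function withCorrelationId(correlationId: string, next: () => void): void {
  requestContext.run({ correlationId }, next);
}
```

A logger wrapper can then attach `currentCorrelationId()` to every entry automatically, which is the effect the `logger.child({ correlationId })` pattern above achieves explicitly.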
### Audit Trail
```typescript
import { auditLogger } from './services/auditLogger';
// User actions audit
const logUserAction = async (userId, action, details = {}) => {
  await auditLogger.log({
    type: 'user_action',
    userId,
    action,
    details,
    timestamp: new Date(),
    ip: details.ip,
    userAgent: details.userAgent
  });
};

// Security events audit
const logSecurityEvent = async (event, severity, details = {}) => {
  await auditLogger.log({
    type: 'security_event',
    event,
    severity,
    details,
    timestamp: new Date(),
    ip: details.ip,
    userAgent: details.userAgent
  });
};

// Data access audit
const logDataAccess = async (userId, resource, operation, details = {}) => {
  await auditLogger.log({
    type: 'data_access',
    userId,
    resource,
    operation,
    details,
    timestamp: new Date(),
    sensitive: details.sensitive || false
  });
};

// Usage examples
app.post('/api/login', async (req, res) => {
  try {
    const user = await authService.login(req.body.email, req.body.password);
    await logUserAction(user.id, 'login', {
      ip: req.ip,
      userAgent: req.get('User-Agent'),
      success: true
    });
    res.json({ token: user.token });
  } catch (error) {
    await logSecurityEvent('failed_login', 'medium', {
      email: req.body.email,
      ip: req.ip,
      userAgent: req.get('User-Agent'),
      error: error.message
    });
    res.status(401).json({ error: 'Invalid credentials' });
  }
});

app.get('/api/users/:id', async (req, res) => {
  const user = await userService.findById(req.params.id);
  await logDataAccess(req.user.id, 'user', 'read', {
    targetUserId: req.params.id,
    sensitive: true
  });
  res.json(user);
});

app.delete('/api/users/:id', async (req, res) => {
  await userService.delete(req.params.id);
  await logUserAction(req.user.id, 'delete_user', {
    targetUserId: req.params.id,
    ip: req.ip,
    userAgent: req.get('User-Agent')
  });
  res.status(204).send();
});
```
### Performance Monitoring
```typescript
import { performanceLogger } from './services/performanceLogger';
// Function execution timing
const timeFunction = (fn, name) => {
  return async (...args) => {
    const startTime = process.hrtime.bigint();
    try {
      const result = await fn(...args);
      const endTime = process.hrtime.bigint();
      const duration = Number(endTime - startTime) / 1000000; // Convert to milliseconds
      performanceLogger.info('Function executed', {
        function: name,
        duration,
        success: true,
        args: args.length
      });
      return result;
    } catch (error) {
      const endTime = process.hrtime.bigint();
      const duration = Number(endTime - startTime) / 1000000;
      performanceLogger.error('Function failed', {
        function: name,
        duration,
        success: false,
        error: error.message,
        args: args.length
      });
      throw error;
    }
  };
};

// Database query performance
const logDatabaseQuery = (query, duration, resultCount) => {
  performanceLogger.info('Database query executed', {
    query: query.substring(0, 100), // Truncate long queries
    duration,
    resultCount,
    slow: duration > 1000 // Flag slow queries
  });
};

// API endpoint performance
const performanceMiddleware = (req, res, next) => {
  const startTime = Date.now();
  res.on('finish', () => {
    const duration = Date.now() - startTime;
    performanceLogger.info('API request completed', {
      method: req.method,
      url: req.url,
      statusCode: res.statusCode,
      duration,
      userAgent: req.get('User-Agent'),
      ip: req.ip,
      slow: duration > 2000 // Flag slow requests
    });
  });
  next();
};

// Memory usage monitoring
const logMemoryUsage = () => {
  const usage = process.memoryUsage();
  performanceLogger.info('Memory usage', {
    rss: Math.round(usage.rss / 1024 / 1024), // MB
    heapTotal: Math.round(usage.heapTotal / 1024 / 1024), // MB
    heapUsed: Math.round(usage.heapUsed / 1024 / 1024), // MB
    external: Math.round(usage.external / 1024 / 1024), // MB
    timestamp: new Date().toISOString()
  });
};

// Log memory usage every 30 seconds
setInterval(logMemoryUsage, 30000);
```
### Custom Log Formats
```typescript
import winston from 'winston';
import os from 'os';

// Custom JSON format
const customJsonFormat = winston.format.combine(
  winston.format.timestamp(),
  winston.format.errors({ stack: true }),
  winston.format.printf(({ timestamp, level, message, ...meta }) => {
    return JSON.stringify({
      '@timestamp': timestamp,
      level: level.toUpperCase(),
      message,
      service: 'mcp-backend',
      environment: process.env.NODE_ENV || 'development',
      ...meta
    });
  })
);

// Custom text format
const customTextFormat = winston.format.combine(
  winston.format.timestamp({ format: 'YYYY-MM-DD HH:mm:ss' }),
  winston.format.errors({ stack: true }),
  winston.format.printf(({ timestamp, level, message, ...meta }) => {
    const metaStr = Object.keys(meta).length ? JSON.stringify(meta) : '';
    return `${timestamp} [${level.toUpperCase()}] ${message} ${metaStr}`;
  })
);

// ELK Stack format
const elkFormat = winston.format.combine(
  winston.format.timestamp(),
  winston.format.errors({ stack: true }),
  winston.format.printf(({ timestamp, level, message, ...meta }) => {
    return JSON.stringify({
      '@timestamp': timestamp,
      '@version': '1',
      level,
      message,
      host: os.hostname(),
      service: 'mcp-backend',
      ...meta
    });
  })
);

// Structured format for monitoring
const monitoringFormat = winston.format.combine(
  winston.format.timestamp(),
  winston.format.errors({ stack: true }),
  winston.format.printf(({ timestamp, level, message, ...meta }) => {
    const structured = {
      time: timestamp,
      severity: level.toUpperCase(),
      msg: message,
      service: 'mcp-backend',
      version: process.env.APP_VERSION || '1.0.0',
      pid: process.pid,
      hostname: os.hostname()
    };
    // Add metadata
    Object.assign(structured, meta);
    return JSON.stringify(structured);
  })
);
```
### Log Aggregation and Analysis
```typescript
import { logAnalyzer } from './services/logAnalyzer';
import { logger } from './services/logging';

// Error rate analysis
const analyzeErrorRate = async (timeRange = '1h') => {
  const analysis = await logAnalyzer.getErrorRate({
    timeRange,
    groupBy: ['endpoint', 'statusCode'],
    threshold: 0.05 // 5% error rate threshold
  });

  if (analysis.errorRate > analysis.threshold) {
    logger.warn('High error rate detected', {
      errorRate: analysis.errorRate,
      threshold: analysis.threshold,
      timeRange,
      topErrors: analysis.topErrors
    });
  }

  return analysis;
};

// Performance analysis
const analyzePerformance = async (endpoint, timeRange = '1h') => {
  const analysis = await logAnalyzer.getPerformanceMetrics({
    endpoint,
    timeRange,
    metrics: ['avg', 'p95', 'p99', 'max']
  });

  if (analysis.p95 > 2000) { // 2 seconds threshold
    logger.warn('Slow endpoint detected', {
      endpoint,
      p95: analysis.p95,
      p99: analysis.p99,
      requestCount: analysis.requestCount
    });
  }

  return analysis;
};

// Security analysis
const analyzeSecurityEvents = async (timeRange = '1h') => {
  const analysis = await logAnalyzer.getSecurityEvents({
    timeRange,
    events: ['failed_login', 'suspicious_activity', 'rate_limit_exceeded'],
    groupBy: ['ip', 'userAgent']
  });

  // Detect potential attacks
  const suspiciousIPs = analysis.events
    .filter(event => event.count > 10) // More than 10 failed attempts
    .map(event => event.ip);

  if (suspiciousIPs.length > 0) {
    logger.error('Potential security threat detected', {
      suspiciousIPs,
      timeRange,
      totalEvents: analysis.totalEvents
    });
  }

  return analysis;
};

// Automated analysis scheduler
const scheduleAnalysis = () => {
  // Run every 5 minutes
  setInterval(async () => {
    try {
      await Promise.all([
        analyzeErrorRate('5m'),
        analyzePerformance('/api/users', '5m'),
        analyzeSecurityEvents('5m')
      ]);
    } catch (error) {
      logger.error('Log analysis failed', { error: error.message });
    }
  }, 5 * 60 * 1000);
};
```
### Log Retention and Archiving
```typescript
import schedule from 'node-schedule';
import { logRetentionService } from './services/logRetention';
import { logger } from './services/logging';

// Configure retention policies
const retentionPolicies = {
  debug: '7d',  // Keep debug logs for 7 days
  info: '30d',  // Keep info logs for 30 days
  warn: '90d',  // Keep warning logs for 90 days
  error: '1y',  // Keep error logs for 1 year
  audit: '7y'   // Keep audit logs for 7 years
};

// Archive old logs
const archiveLogs = async () => {
  for (const [level, retention] of Object.entries(retentionPolicies)) {
    try {
      const archived = await logRetentionService.archive({
        level,
        olderThan: retention,
        destination: 's3://logs-archive/',
        compress: true,
        encrypt: true
      });
      logger.info('Logs archived', {
        level,
        filesArchived: archived.fileCount,
        sizeArchived: archived.totalSize,
        destination: archived.destination
      });
    } catch (error) {
      logger.error('Log archiving failed', {
        level,
        error: error.message
      });
    }
  }
};

// Clean up old logs
const cleanupLogs = async () => {
  for (const [level, retention] of Object.entries(retentionPolicies)) {
    try {
      const cleaned = await logRetentionService.cleanup({
        level,
        olderThan: retention,
        dryRun: false
      });
      logger.info('Logs cleaned up', {
        level,
        filesDeleted: cleaned.fileCount,
        sizeFreed: cleaned.totalSize
      });
    } catch (error) {
      logger.error('Log cleanup failed', {
        level,
        error: error.message
      });
    }
  }
};

// Schedule retention tasks
const scheduleRetention = () => {
  // Archive logs daily at 2 AM
  const archiveJob = schedule.scheduleJob('0 2 * * *', archiveLogs);
  // Clean up logs weekly on Sunday at 3 AM
  const cleanupJob = schedule.scheduleJob('0 3 * * 0', cleanupLogs);

  logger.info('Log retention scheduled', {
    archiveSchedule: '0 2 * * *',
    cleanupSchedule: '0 3 * * 0'
  });
};
```
## Testing
```typescript
// tests/logging.test.ts
import { logger } from '../src/services/logging';
import { auditLogger } from '../src/services/auditLogger';
import { loggingMiddleware } from '../src/middleware/logging';
import request from 'supertest';
import express from 'express';

describe('Logging Service', () => {
  let logSpy;

  beforeEach(() => {
    logSpy = jest.spyOn(logger, 'info');
  });

  afterEach(() => {
    logSpy.mockRestore();
  });

  it('should log messages with correct format', () => {
    logger.info('Test message', { userId: 123 });
    expect(logSpy).toHaveBeenCalledWith('Test message', { userId: 123 });
  });

  it('should include correlation ID in child logger', () => {
    const childLogger = logger.child({ correlationId: 'test-123' });
    childLogger.info('Child log message');
    expect(logSpy).toHaveBeenCalledWith(
      'Child log message',
      expect.objectContaining({ correlationId: 'test-123' })
    );
  });

  it('should log requests through middleware', async () => {
    const app = express();
    app.use(loggingMiddleware.requests());
    app.get('/test', (req, res) => res.json({ success: true }));

    await request(app).get('/test');

    expect(logSpy).toHaveBeenCalledWith(
      expect.stringContaining('Request'),
      expect.objectContaining({
        method: 'GET',
        url: '/test'
      })
    );
  });

  it('should sanitize sensitive data', () => {
    logger.info('User login', {
      email: 'user@example.com',
      password: 'secret123',
      token: 'jwt-token'
    });
    expect(logSpy).toHaveBeenCalledWith(
      'User login',
      expect.objectContaining({
        email: 'user@example.com',
        password: '[REDACTED]',
        token: '[REDACTED]'
      })
    );
  });
});

describe('Audit Logger', () => {
  it('should log audit events', async () => {
    const auditSpy = jest.spyOn(auditLogger, 'log');
    await auditLogger.log({
      type: 'user_action',
      userId: 123,
      action: 'login',
      ip: '192.168.1.1'
    });
    expect(auditSpy).toHaveBeenCalledWith(
      expect.objectContaining({
        type: 'user_action',
        userId: 123,
        action: 'login'
      })
    );
  });
});
```
```bash
npm test -- logging
```
## Dependencies
- winston (Core logging library)
- winston-daily-rotate-file (Log rotation)
- morgan (HTTP request logging)
- cls-hooked (Continuation-local storage for context)
- elasticsearch (Elasticsearch integration)
- @sentry/node (Error tracking)
- node-schedule (Task scheduling)
- compression (Log compression)
## Integration
- Integrates with all modules for comprehensive logging
- Provides middleware for Express applications
- Works with monitoring module for metrics
- Integrates with auth module for user context
- Supports database module for persistent logging
## Error Handling
- `LoggingError`: General logging error
- `LogRotationError`: Log rotation failed
- `LogTransportError`: Log transport failed
- `LogFormatError`: Log formatting error
- `LogRetentionError`: Log retention/archiving failed
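These error types can be modeled as a plain subclass hierarchy rooted at `LoggingError`, which lets callers catch the whole family with one `instanceof` check. The shapes below are assumptions for illustration; the module's source defines the real constructors:

```typescript
// Hypothetical shapes for the error types listed above.
class LoggingError extends Error {
  constructor(message: string) {
    super(message);
    this.name = 'LoggingError';
  }
}

class LogRotationError extends LoggingError {
  constructor(message: string) {
    super(message);
    this.name = 'LogRotationError';
  }
}

class LogTransportError extends LoggingError {
  // Records which transport (file, elasticsearch, slack, ...) failed.
  constructor(message: string, public readonly transport?: string) {
    super(message);
    this.name = 'LogTransportError';
  }
}

// LogFormatError and LogRetentionError would follow the same pattern.
```

A caller can then write `catch (e) { if (e instanceof LoggingError) ... }` to handle any logging failure without enumerating the subtypes.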
## Best Practices
1. **Structured Logging**: Use structured data for better searchability
2. **Log Levels**: Use appropriate log levels for different events
3. **Sensitive Data**: Always sanitize sensitive information
4. **Performance**: Use async logging for high-throughput applications
5. **Correlation**: Include correlation IDs for request tracing
6. **Retention**: Implement proper log retention policies
7. **Monitoring**: Monitor log volume and error rates
8. **Security**: Protect log files and transmission
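Practice 3 (sanitizing sensitive data) can be sketched as a small recursive redaction helper applied to metadata before it reaches a transport. The field list and function name below are illustrative; the module's middleware accepts its own `sanitizeFields` option:

```typescript
// Field names whose values should never appear in logs (illustrative list).
const SENSITIVE_FIELDS = new Set(['password', 'token', 'authorization', 'secret']);

// Recursively replace sensitive values with a placeholder before logging.
function sanitize(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(sanitize);
  if (value !== null && typeof value === 'object') {
    const out: Record<string, unknown> = {};
    for (const [key, val] of Object.entries(value as Record<string, unknown>)) {
      out[key] = SENSITIVE_FIELDS.has(key.toLowerCase()) ? '[REDACTED]' : sanitize(val);
    }
    return out;
  }
  return value;
}
```

Running metadata through `sanitize` before every transport write keeps redaction in one place instead of relying on each call site to remember it.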
## License
MIT
## 🔗 Links
- [Back to module index](./README.md)
- [Main documentation](../README.md)
- [Source code](../../modules/logging/)