backend-mcp
Automatic backend generator with Node.js, Express, Prisma, and configurable modules. An npx-compatible MCP server for AI agents. Supports PostgreSQL, MySQL, MongoDB, and SQLite.
# 📦 docker Module
**Version:** 1.0.0
**Category:** deployment
**Description:** Full containerization with Docker and Docker Compose for development and production
## 📊 Module Status
| Component | Status |
|-----------|--------|
| Initialization script | ✅ Available |
| Templates | ❌ Missing |
| Examples | ❌ Missing |
## 🔗 Dependencies
### Optional
- `database`
- `cache`
- `logging`
- `email`
## 🤖 AI Triggers
This module is activated automatically when the following keywords are detected:
- **needs_containerization**: true
- **wants_docker**: true
- **deployment_ready**: true
- **microservices_architecture**: true
## ✨ Features
- multi-stage-builds
- development-compose
- production-compose
- nginx-reverse-proxy
- ssl-termination
- health-checks
- volume-management
- network-isolation
- environment-specific-configs
- docker-secrets
- auto-restart-policies
- resource-limits
## 📖 Full Documentation
# Docker Module
Comprehensive Docker containerization and orchestration module for MCP Backend framework.
## Features
- 🐳 Multi-stage Docker builds
- 🔧 Development and production configurations
- 🚀 Docker Compose orchestration
- 📊 Health checks and monitoring
- 🔒 Security best practices
- 📈 Performance optimization
- 🌐 Multi-architecture support
- 🔄 Auto-scaling configurations
- 📝 Comprehensive logging
- 🛡️ Network security
## Installation
This module is automatically installed when using the MCP Backend Generator.
## Configuration
### Environment Variables
**Docker Configuration:**
- `DOCKER_REGISTRY` (optional) - Docker registry URL (default: docker.io)
- `DOCKER_NAMESPACE` (optional) - Docker namespace/organization
- `DOCKER_TAG` (optional) - Docker image tag (default: latest)
- `DOCKER_PLATFORM` (optional) - Target platform (default: linux/amd64)
**Application Configuration:**
- `NODE_ENV` - Environment (development, staging, production)
- `PORT` - Application port (default: 3000)
- `HOST` - Application host (default: 0.0.0.0)
- `WORKERS` (optional) - Number of worker processes (default: auto)
**Database Configuration:**
- `DATABASE_URL` - Database connection string
- `REDIS_URL` - Redis connection string
- `DB_POOL_SIZE` (optional) - Database connection pool size (default: 10)
**Security Configuration:**
- `JWT_SECRET` - JWT secret key
- `ENCRYPTION_KEY` - Data encryption key
- `CORS_ORIGIN` - CORS allowed origins
- `RATE_LIMIT_WINDOW` (optional) - Rate limiting window (default: 15m)
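A small config loader can centralize these variables with the defaults listed above. This is an illustrative sketch, not part of the generated code: the `loadConfig` and `parseDuration` helpers are hypothetical names, and only the variables with documented defaults are shown.

```typescript
// Illustrative loader for the environment variables documented above.
type AppConfig = {
  nodeEnv: string;
  port: number;
  host: string;
  dbPoolSize: number;
  rateLimitWindowMs: number;
};

// Parse durations like "15m", "30s", "1h" into milliseconds.
function parseDuration(value: string): number {
  const match = /^(\d+)(ms|s|m|h)$/.exec(value);
  if (!match) throw new Error(`Invalid duration: ${value}`);
  const units: Record<string, number> = { ms: 1, s: 1000, m: 60_000, h: 3_600_000 };
  return Number(match[1]) * units[match[2]];
}

function loadConfig(env: Record<string, string | undefined> = process.env): AppConfig {
  return {
    nodeEnv: env.NODE_ENV ?? "development",
    port: Number(env.PORT ?? 3000),          // default: 3000
    host: env.HOST ?? "0.0.0.0",             // default: 0.0.0.0
    dbPoolSize: Number(env.DB_POOL_SIZE ?? 10),
    rateLimitWindowMs: parseDuration(env.RATE_LIMIT_WINDOW ?? "15m"),
  };
}
```

Reading everything through one function keeps defaults in a single place and makes the container's `environment:` entries the only source of per-environment differences.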
### Docker Configuration Files
```dockerfile
# Dockerfile
FROM node:18-alpine AS base

# Install dependencies only when needed
FROM base AS deps
RUN apk add --no-cache libc6-compat
WORKDIR /app

# Copy package files
COPY package.json package-lock.json* ./
RUN npm ci --omit=dev && npm cache clean --force

# Development stage
FROM base AS dev
WORKDIR /app
COPY package.json package-lock.json* ./
RUN npm ci
COPY . .
EXPOSE 3000
CMD ["npm", "run", "dev"]

# Build stage
FROM base AS builder
WORKDIR /app
COPY package.json package-lock.json* ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage
FROM base AS production
WORKDIR /app

# Create non-root user
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nodejs

# Copy built application
COPY --from=builder --chown=nodejs:nodejs /app/dist ./dist
COPY --from=deps --chown=nodejs:nodejs /app/node_modules ./node_modules
COPY --chown=nodejs:nodejs package.json ./

# Clean up package cache
RUN rm -rf /var/cache/apk/*

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
  CMD node dist/healthcheck.js || exit 1

USER nodejs
EXPOSE 3000
CMD ["node", "dist/index.js"]
```
```yaml
# docker-compose.yml
version: '3.8'

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
      target: production
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgresql://user:password@postgres:5432/myapp
      - REDIS_URL=redis://redis:6379
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks:
      - app-network
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '0.5'
        reservations:
          memory: 256M
          cpus: '0.25'
    healthcheck:
      test: ["CMD", "node", "dist/healthcheck.js"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s

  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./docker/postgres/init.sql:/docker-entrypoint-initdb.d/init.sql
    networks:
      - app-network
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d myapp"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    command: redis-server --appendonly yes --requirepass redispassword
    volumes:
      - redis_data:/data
    networks:
      - app-network
    restart: unless-stopped
    healthcheck:
      # The server requires auth, so the healthcheck must authenticate too
      test: ["CMD-SHELL", "redis-cli -a redispassword ping | grep PONG"]
      interval: 10s
      timeout: 3s
      retries: 5

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./docker/nginx/ssl:/etc/nginx/ssl
    depends_on:
      - app
    networks:
      - app-network
    restart: unless-stopped

volumes:
  postgres_data:
  redis_data:

networks:
  app-network:
    driver: bridge
```
```yaml
# docker-compose.dev.yml
version: '3.8'

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
      target: dev
    ports:
      - "3000:3000"
      - "9229:9229"  # Debug port
    environment:
      - NODE_ENV=development
      - DATABASE_URL=postgresql://user:password@postgres:5432/myapp_dev
      - REDIS_URL=redis://redis:6379
    volumes:
      - .:/app
      - /app/node_modules
    depends_on:
      - postgres
      - redis
    networks:
      - app-network
    command: npm run dev:debug

  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: myapp_dev
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    ports:
      - "5432:5432"
    volumes:
      - postgres_dev_data:/var/lib/postgresql/data
    networks:
      - app-network

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_dev_data:/data
    networks:
      - app-network

  adminer:
    image: adminer
    ports:
      - "8080:8080"
    depends_on:
      - postgres
    networks:
      - app-network

volumes:
  postgres_dev_data:
  redis_dev_data:

networks:
  app-network:
    driver: bridge
```
## Usage
### Development Environment
```bash
# Start development environment
docker-compose -f docker-compose.dev.yml up -d
# View logs
docker-compose -f docker-compose.dev.yml logs -f app
# Execute commands in container
docker-compose -f docker-compose.dev.yml exec app npm run migrate
docker-compose -f docker-compose.dev.yml exec app npm run seed
# Stop development environment
docker-compose -f docker-compose.dev.yml down
# Stop and remove volumes
docker-compose -f docker-compose.dev.yml down -v
```
### Production Environment
```bash
# Build production image
docker build -t myapp:latest .
# Start production environment
docker-compose up -d
# Scale application
docker-compose up -d --scale app=3
# Update application
docker-compose pull
docker-compose up -d
# View logs
docker-compose logs -f
# Monitor resources
docker stats
```
### Docker Commands
```bash
# Build multi-architecture image
docker buildx build --platform linux/amd64,linux/arm64 -t myapp:latest .
# Push to registry
docker tag myapp:latest registry.example.com/myapp:latest
docker push registry.example.com/myapp:latest
# Run a vulnerability scan (the old `docker scan` command has been replaced by Docker Scout)
docker scout cves myapp:latest
# Inspect image
docker inspect myapp:latest
# Check image size
docker images myapp:latest
# Remove unused images
docker image prune -f
# Remove unused volumes
docker volume prune -f
```
### Health Checks
```typescript
// src/healthcheck.ts
import http from 'http';
import { database } from './database';
import { redis } from './cache';

const healthcheck = async () => {
  try {
    // Check application server
    const options = {
      hostname: 'localhost',
      port: process.env.PORT || 3000,
      path: '/health',
      method: 'GET',
      timeout: 5000
    };

    await new Promise((resolve, reject) => {
      const req = http.request(options, (res) => {
        if (res.statusCode === 200) {
          resolve(res);
        } else {
          reject(new Error(`Health check failed with status ${res.statusCode}`));
        }
      });
      req.on('error', reject);
      req.on('timeout', () => reject(new Error('Health check timeout')));
      req.setTimeout(5000);
      req.end();
    });

    // Check database connection
    await database.raw('SELECT 1');

    // Check Redis connection
    await redis.ping();

    console.log('Health check passed');
    process.exit(0);
  } catch (error) {
    console.error('Health check failed:', (error as Error).message);
    process.exit(1);
  }
};

healthcheck();
```
```typescript
// src/routes/health.ts
import { Router } from 'express';
import { database } from '../database';
import { redis } from '../cache';

const router = Router();

router.get('/health', async (req, res) => {
  const health = {
    status: 'ok',
    timestamp: new Date().toISOString(),
    uptime: process.uptime(),
    memory: process.memoryUsage(),
    services: {
      database: 'unknown',
      redis: 'unknown',
      external: 'unknown'
    }
  };

  try {
    // Check database
    await database.raw('SELECT 1');
    health.services.database = 'healthy';
  } catch (error) {
    health.services.database = 'unhealthy';
    health.status = 'degraded';
  }

  try {
    // Check Redis
    await redis.ping();
    health.services.redis = 'healthy';
  } catch (error) {
    health.services.redis = 'unhealthy';
    health.status = 'degraded';
  }

  try {
    // Check external services
    // Add your external service checks here
    health.services.external = 'healthy';
  } catch (error) {
    health.services.external = 'unhealthy';
    health.status = 'degraded';
  }

  const statusCode = health.status === 'ok' ? 200 : 503;
  res.status(statusCode).json(health);
});

router.get('/ready', async (req, res) => {
  try {
    // Check whether the application is ready to serve traffic
    await database.raw('SELECT 1');
    await redis.ping();
    res.status(200).json({
      status: 'ready',
      timestamp: new Date().toISOString()
    });
  } catch (error) {
    res.status(503).json({
      status: 'not ready',
      error: (error as Error).message,
      timestamp: new Date().toISOString()
    });
  }
});

router.get('/live', (req, res) => {
  // Simple liveness check
  res.status(200).json({
    status: 'alive',
    timestamp: new Date().toISOString()
  });
});

export default router;
```
### Nginx Configuration
```nginx
# docker/nginx/nginx.conf
events {
    worker_connections 1024;
}

http {
    upstream app {
        least_conn;
        server app:3000 max_fails=3 fail_timeout=30s;
    }

    # Rate limiting
    limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
    limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;

    # Security headers
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";

    server {
        listen 80;
        server_name localhost;

        # Redirect HTTP to HTTPS
        return 301 https://$server_name$request_uri;
    }

    server {
        listen 443 ssl http2;
        server_name localhost;

        # SSL configuration
        ssl_certificate /etc/nginx/ssl/cert.pem;
        ssl_certificate_key /etc/nginx/ssl/key.pem;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384;
        ssl_prefer_server_ciphers off;

        # Gzip compression
        gzip on;
        gzip_vary on;
        gzip_min_length 1024;
        gzip_types text/plain text/css text/xml text/javascript application/javascript application/xml+rss application/json;

        # Health checks
        location /health {
            proxy_pass http://app;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        # API routes with rate limiting
        location /api/ {
            limit_req zone=api burst=20 nodelay;
            proxy_pass http://app;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            # Timeouts
            proxy_connect_timeout 30s;
            proxy_send_timeout 30s;
            proxy_read_timeout 30s;
        }

        # Login endpoint with stricter rate limiting
        location /api/auth/login {
            limit_req zone=login burst=5 nodelay;
            proxy_pass http://app;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        # Static files
        location /static/ {
            expires 1y;
            add_header Cache-Control "public, immutable";
            proxy_pass http://app;
        }
    }
}
```
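The `limit_req` directives implement a leaky-bucket limiter: with `rate=10r/s` and `burst=20 nodelay`, requests arriving faster than the steady rate are still served until 20 excess requests have accumulated, after which they are rejected with 503/429. A simplified sketch of that accounting (illustrative only, not nginx's actual implementation):

```typescript
// Simplified leaky-bucket accounting, similar in spirit to nginx's
// limit_req with "nodelay": excess above the steady rate is tolerated
// up to `burst`, and excess drains continuously at the configured rate.
class LeakyBucket {
  private excess = 0; // requests currently above the steady rate
  private last = 0;   // timestamp of the previous request (ms)

  constructor(private ratePerSec: number, private burst: number) {}

  // Returns true if a request arriving at `nowMs` is allowed.
  allow(nowMs: number): boolean {
    const elapsed = nowMs - this.last;
    // Excess drains at ratePerSec while time passes.
    this.excess = Math.max(0, this.excess - (elapsed * this.ratePerSec) / 1000);
    this.last = nowMs;
    if (this.excess > this.burst) return false; // over burst: reject
    this.excess += 1;
    return true;
  }
}
```

With `rate=10r/s burst=20`, a burst of 30 simultaneous requests sees 21 served (1 on-rate plus 20 burst) and 9 rejected; a second later, roughly 10 slots have drained back.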
### Docker Swarm Configuration
```yaml
# docker-stack.yml
version: '3.8'

services:
  app:
    image: myapp:latest
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgresql://user:password@postgres:5432/myapp
      - REDIS_URL=redis://redis:6379
    networks:
      - app-network
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
        failure_action: rollback
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
      resources:
        limits:
          memory: 512M
          cpus: '0.5'
        reservations:
          memory: 256M
          cpus: '0.25'
      placement:
        constraints:
          - node.role == worker
    healthcheck:
      test: ["CMD", "node", "dist/healthcheck.js"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s

  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: user
      POSTGRES_PASSWORD_FILE: /run/secrets/postgres_password
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - app-network
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager
    secrets:
      - postgres_password

  redis:
    image: redis:7-alpine
    # redis-server has no --requirepass-file option; read the secret at startup instead
    command: sh -c 'redis-server --appendonly yes --requirepass "$$(cat /run/secrets/redis_password)"'
    volumes:
      - redis_data:/data
    networks:
      - app-network
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager
    secrets:
      - redis_password

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf
    networks:
      - app-network
    deploy:
      replicas: 2
      update_config:
        parallelism: 1
        delay: 10s
      placement:
        constraints:
          - node.role == worker

volumes:
  postgres_data:
  redis_data:

networks:
  app-network:
    driver: overlay
    attachable: true

secrets:
  postgres_password:
    external: true
  redis_password:
    external: true
```
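Since the stack declares `postgres_password` and `redis_password` as external secrets, they must exist before the stack is deployed. Assuming a Swarm is already initialized, deployment might look like this (standard Docker CLI; the stack name `myapp` and the placeholder secret values are illustrative):

```bash
# Create the external secrets referenced by docker-stack.yml
printf 'changeme' | docker secret create postgres_password -
printf 'changeme' | docker secret create redis_password -

# Deploy (or update) the stack
docker stack deploy -c docker-stack.yml myapp

# Inspect services and replica state
docker stack services myapp
docker service logs -f myapp_app
```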
### Kubernetes Configuration
```yaml
# k8s/deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: "production"
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: myapp-secrets
                  key: database-url
            - name: REDIS_URL
              valueFrom:
                secretKeyRef:
                  name: myapp-secrets
                  key: redis-url
          resources:
            limits:
              memory: "512Mi"
              cpu: "500m"
            requests:
              memory: "256Mi"
              cpu: "250m"
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 10
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /ready
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: LoadBalancer
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
```
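The Deployment reads `DATABASE_URL` and `REDIS_URL` from a `myapp-secrets` Secret that must be created first. Applying the manifests could then look like this (standard kubectl commands; the connection-string values are placeholders):

```bash
# Create the secret referenced by the Deployment
kubectl create secret generic myapp-secrets \
  --from-literal=database-url='postgresql://user:password@postgres:5432/myapp' \
  --from-literal=redis-url='redis://redis:6379'

# Apply the manifests and watch the rollout
kubectl apply -f k8s/deployment.yml
kubectl rollout status deployment/myapp

# Verify the autoscaler picked up the Deployment
kubectl get hpa myapp-hpa
```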
### Monitoring and Logging
```yaml
# docker-compose.monitoring.yml
version: '3.8'
services:
prometheus:
image: prom/prometheus:latest
ports:
- "9090:9090"
volumes:
- ./docker/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
- prometheus_data:/prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--web.console.libraries=/etc/prometheus/console_libraries'
- '--web.console.templates=/etc/prometheus/consoles'
- '--storage.tsdb.retention.time=200h'
- '--web.enable-lifecycle'
networks:
- monitoring
grafana:
image: grafana/grafana:latest
ports:
- "3001:3000"
environment:
- GF_SECURITY_ADMIN_PASSWORD=admin
volumes:
- grafana_data:/var/lib/grafana
- ./docker/grafana/provisioning:/etc/grafana/provisioning
networks:
- monitoring
node-exporter:
image: prom/node-exporter:latest
ports:
- "9100:9100"
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/rootfs:ro
command:
- '--path.procfs=/host/proc'
- '--path.rootfs=/rootfs'
- '--path.sysfs=/host/sys'
- '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
networks:
- monitoring
cadvisor:
image: gcr.io/cadvisor/cadvisor:latest
ports:
- "8081:8080"
volumes:
- /:/rootfs:ro
- /var/run:/var/run:rw
- /sys:/sys:ro
- /var/lib/docker/:/var/lib/docker:ro
networks:
- monitoring
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:8.5.0
environment:
- discovery.type=single-node
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- xpack.security.enabled=false
volumes:
- elasticsearch_data:/usr/share/elasticsearch/data
ports:
- "9200:9200"
networks:
- monitoring
kibana:
image: docker.elastic.co/kibana/kibana:8.5.0
ports:
- "5601:5601"
environment:
- ELASTICSEARCH_HOSTS=http://elasticsearch:9200
depends_on:
- elasticsearch
networks:
- monitoring
logstash:
image: docker.elastic.co/logstash/logstash:8.5.0
volumes:
- ./docker/logstash/pipeline:/usr/share/logstash/pipeline
ports:
- "5044:5044"
environment:
- "LS_JAVA_OPTS=-Xmx256m -Xms256m"
depends_on:
- elasticsearch
networks:
- monitoring
volumes:
prometheus_data:
grafana_data:
elasticsearch_data:
networks:
monitoring:
driver: bridge
```
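The Prometheus service mounts `./docker/prometheus/prometheus.yml`, which is not shown above. A minimal scrape configuration might look like the sketch below; the job names and the app's `/metrics` path are assumptions, and the application would need to both expose Prometheus metrics and join the `monitoring` network (it runs on `app-network` in the main compose file) for the `app` job to work.

```yaml
# docker/prometheus/prometheus.yml (minimal sketch)
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-exporter:9100']

  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']

  - job_name: 'app'
    metrics_path: /metrics
    static_configs:
      - targets: ['app:3000']
```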
## Security Best Practices
### Dockerfile Security
```dockerfile
# Security-hardened Dockerfile
FROM node:18-alpine AS base

# Install security updates
RUN apk update && apk upgrade && apk add --no-cache dumb-init

# Create non-root user
RUN addgroup -g 1001 -S nodejs && adduser -S nodejs -u 1001

# Set working directory
WORKDIR /app

# Copy package files
COPY package*.json ./

# Install production dependencies
RUN npm ci --omit=dev && npm cache clean --force

# Copy application code
COPY --chown=nodejs:nodejs . .

# Remove files that should not ship in the image
RUN rm -rf .git .gitignore README.md docker-compose*.yml Dockerfile*

# Set file permissions
RUN chmod -R 755 /app && chmod 644 /app/package*.json

# Switch to non-root user
USER nodejs

# Expose port
EXPOSE 3000

# Use dumb-init for proper signal handling
ENTRYPOINT ["dumb-init", "--"]
CMD ["node", "dist/index.js"]
```
### Security Scanning
```bash
#!/bin/bash
# scripts/security-scan.sh
echo "Running security scans..."
# Scan Docker image for vulnerabilities (`docker scan` has been replaced by Docker Scout)
echo "Scanning Docker image..."
docker scout cves myapp:latest
# Scan dependencies for vulnerabilities
echo "Scanning dependencies..."
npm audit
# Run static code analysis
echo "Running static analysis..."
npx eslint src/ --ext .ts,.js
# Check for secrets in code
echo "Checking for secrets..."
git secrets --scan
# Validate Docker configuration
echo "Validating Docker configuration..."
docker run --rm -i hadolint/hadolint < Dockerfile
echo "Security scan completed."
```
## Performance Optimization
### Multi-stage Build Optimization
```dockerfile
# Optimized multi-stage build
FROM node:18-alpine AS dependencies
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev && npm cache clean --force

FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
RUN npm prune --omit=dev

FROM node:18-alpine AS runtime
WORKDIR /app
RUN addgroup -g 1001 -S nodejs && adduser -S nodejs -u 1001
COPY --from=dependencies --chown=nodejs:nodejs /app/node_modules ./node_modules
COPY --from=build --chown=nodejs:nodejs /app/dist ./dist
COPY --chown=nodejs:nodejs package.json ./
USER nodejs
EXPOSE 3000
CMD ["node", "dist/index.js"]
```
### Resource Optimization
```yaml
# Resource-optimized compose file
version: '3.8'

services:
  app:
    image: myapp:latest
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '0.5'
        reservations:
          memory: 256M
          cpus: '0.25'
    environment:
      - NODE_OPTIONS=--max-old-space-size=400
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
```
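The heap cap (`--max-old-space-size=400`) is deliberately set below the 512M container limit so that non-heap memory (buffers, stacks, compiled code) still fits; a common rule of thumb is roughly 75–80% of the limit. A tiny helper to derive the value (the function name and the default ratio are illustrative, not a Node.js API):

```typescript
// Derive a --max-old-space-size value (in MB) from a container memory
// limit, leaving headroom for non-heap memory. The 0.78 ratio is a rule
// of thumb, not a fixed requirement.
function heapSizeMb(containerLimitMb: number, ratio = 0.78): number {
  return Math.floor(containerLimitMb * ratio);
}
```

For a 512M limit this yields about 400 MB, matching the compose file above.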
## Testing
```typescript
// tests/docker.test.ts
import { execSync } from 'child_process';
import request from 'supertest';

describe('Docker Integration Tests', () => {
  beforeAll(async () => {
    // Build test image
    execSync('docker build -t myapp:test .', { stdio: 'inherit' });

    // Start test containers
    execSync('docker-compose -f docker-compose.test.yml up -d', { stdio: 'inherit' });

    // Wait for services to be ready
    await new Promise(resolve => setTimeout(resolve, 30000));
  });

  afterAll(() => {
    // Clean up test containers
    execSync('docker-compose -f docker-compose.test.yml down -v', { stdio: 'inherit' });
  });

  it('should respond to health checks', async () => {
    const response = await request('http://localhost:3000')
      .get('/health')
      .expect(200);

    expect(response.body.status).toBe('ok');
  });

  it('should handle API requests', async () => {
    const response = await request('http://localhost:3000')
      .get('/api/users')
      .expect(200);

    expect(Array.isArray(response.body.data)).toBe(true);
  });
});
```
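The tests above reference a `docker-compose.test.yml` that is not shown in this module. A minimal sketch of what it might contain (the `myapp:test` image matches the build step in the test setup; the test database name is an assumption):

```yaml
# docker-compose.test.yml (minimal sketch)
version: '3.8'

services:
  app:
    image: myapp:test
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=test
      - DATABASE_URL=postgresql://user:password@postgres:5432/myapp_test
      - REDIS_URL=redis://redis:6379
    depends_on:
      - postgres
      - redis

  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: myapp_test
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password

  redis:
    image: redis:7-alpine
```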
## Dependencies
- Docker Engine 20.10+
- Docker Compose 2.0+
- Node.js 18+ (Alpine)
- PostgreSQL 15 (Alpine)
- Redis 7 (Alpine)
- Nginx (Alpine)
## Integration
- Integrates with all modules for containerization
- Provides development and production environments
- Supports CI/CD pipelines
- Works with orchestration platforms
- Includes monitoring and logging setup
## Best Practices
1. **Multi-stage Builds**: Use multi-stage builds for smaller images
2. **Security**: Run containers as non-root users
3. **Health Checks**: Implement comprehensive health checks
4. **Resource Limits**: Set appropriate resource limits
5. **Secrets Management**: Use Docker secrets or external secret managers
6. **Logging**: Centralize logging with structured formats
7. **Monitoring**: Implement comprehensive monitoring
8. **Updates**: Keep base images and dependencies updated
## License
MIT
## 🔗 Links
- [Back to module index](./README.md)
- [Main documentation](../README.md)
- [Source code](../../modules/docker/)