@kadi.build/local-remote-file-manager-ability
Local & Remote File Management System with S3-compatible container registry, HTTP server provider, file streaming, and comprehensive testing suite
# Local & Remote File Manager
A comprehensive Node.js CLI tool and library for local file management with advanced features including **real-time file watching**, **compression/decompression**, **secure temporary file sharing via tunneling**, and **S3-compatible object storage**. It provides powerful local operations today, with remote server operations planned for future releases.
[MIT License](https://opensource.org/licenses/MIT)
[Node.js](https://nodejs.org/)
[Test Results](#test-results)
## Features
- **Complete Local File Management**: Full CRUD operations for files and folders with advanced path handling
- **Real-time File Watching**: Monitor file and directory changes with event filtering and callbacks
- **Advanced Compression**: ZIP and TAR.GZ compression/decompression with progress tracking
- **Secure File Sharing**: Temporary URL generation with tunnel-based sharing (ngrok/localtunnel integration)
- **HTTP Server Provider**: Complete HTTP server management with static file serving and tunnel integration
- **Enhanced File Streaming**: Optimized file streaming with range requests and progress tracking
- **S3-Compatible Object Storage**: Full S3 endpoints with authentication, bucket mapping, and analytics
- **Real-Time Monitoring Dashboard**: Live progress tracking with visual dashboard and download analytics
- **Auto-Shutdown Management**: Intelligent shutdown triggers based on download completion or timeout
- **Event Notification System**: Comprehensive event system with console, file, and webhook notifications
- **Production-Ready CLI**: Complete command-line interface for all features with interactive help
- **High Performance**: Efficient memory usage, streaming for large files, and batch operations
- **CLI & Library**: Use as a command-line tool or integrate as a Node.js library
- **Robust Error Handling**: Comprehensive error handling and retry logic
- **Progress Tracking**: Real-time progress for long-running operations
- **Path Management**: Automatic folder creation and path normalization
- **Comprehensive Testing**: Full test suite with 225/225 tests passing (100% success rate)
## Table of Contents
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Development Status](#development-status)
- [CLI Usage](#cli-usage)
- [Library Usage](#library-usage)
- [Testing](#testing)
- [API Reference](#api-reference)
- [Performance](#performance)
- [Contributing](#contributing)
## Installation
### As a CLI Tool
```bash
git clone <repository-url>
cd local-remote-file-manager-ability
npm install
npm run setup
```
### Global Installation
```bash
npm install -g local-remote-file-manager
```
### As a Node.js Library
```bash
npm install local-remote-file-manager
```
```javascript
const { createManager, compressFile } = require('local-remote-file-manager');
// Quick start - factory functions
const manager = await createManager();
const files = await manager.getProvider('local').list('./');
// Quick compression
await compressFile('./my-folder', './archive.zip');
```
**See [USAGE.md](./USAGE.md) for complete examples and [INTEGRATION-EXAMPLE.md](./INTEGRATION-EXAMPLE.md) for real-world integration patterns.**
## Quick Start
### CLI Quick Start
1. **Install dependencies**
```bash
npm install
```
2. **Test your setup**
```bash
npm test
# or test specific features
npm run test:cli
npm run test:local
npm run test:s3
```
3. **Basic file operations**
```bash
node index.js copy --source document.pdf --target ./uploads/document.pdf
node index.js upload --file data.zip --target ./uploads/
node index.js list --directory ./uploads
```
4. **Start S3-compatible file server**
```bash
node index.js serve-s3 ./files --port 5000 --auth
# Access at http://localhost:5000/default/<filename>
```
5. **S3 server with bucket mapping**
```bash
node index.js serve-s3 ./storage \
--bucket containers:./storage/containers \
--bucket images:./storage/images \
--port 9000 --auth --tunnel
```
6. **File server with auto-shutdown**
```bash
node index.js serve-s3 ./content --port 8000 \
--auto-shutdown --shutdown-delay 30000
```
7. **Start watching a directory**
```bash
node index.js watch ./documents
```
8. **Compress files**
```bash
node index.js compress --file ./large-file.txt --output ./compressed.zip
```
9. **Share a file temporarily**
```bash
node index.js share ./document.pdf --expires 30m
```
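The `--expires` flag takes a compact duration string (`30m`, `2h`, `24h`). As a sketch of how such values map to milliseconds (illustrative only, not the tool's internal parser; the `parseExpiry` helper and its supported suffixes are assumptions):

```javascript
// Convert a compact duration string such as "30m" or "2h" to milliseconds.
// Supported suffixes in this sketch: s, m, h, d.
function parseExpiry(str) {
  const match = /^(\d+)(s|m|h|d)$/.exec(str.trim());
  if (!match) throw new Error(`Invalid duration: ${str}`);
  const units = { s: 1000, m: 60000, h: 3600000, d: 86400000 };
  return Number(match[1]) * units[match[2]];
}

console.log(parseExpiry('30m')); // 1800000
console.log(parseExpiry('2h'));  // 7200000
```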
## Development Status
### Completed Phases (Production Ready)
| Phase | Status | Features | Test Results |
|-------|--------|----------|--------------|
| **Phase 1: Foundation & Local CRUD** | ✅ Complete | File/folder CRUD, path management, search operations | 33/33 tests passing (100%) |
| **Phase 2: File/Directory Watching** | ✅ Complete | Real-time monitoring, event filtering, recursive watching | 24/24 tests passing (100%) |
| **Phase 3: Compression/Decompression** | ✅ Complete | ZIP/TAR.GZ support, batch operations, progress tracking | 30/30 tests passing (100%) |
| **Phase 4: Tunneling & Temp URLs** | ✅ Complete | Secure file sharing, temporary URLs, multiple tunnel services | 35/35 tests passing (100%) |
| **Phase 5: HTTP Server Provider** | ✅ Complete | HTTP server management, static file serving, tunnel integration | 22/22 tests passing (100%) |
| **Phase 6: File Streaming Enhancement** | ✅ Complete | Enhanced streaming, range requests, MIME detection, progress tracking | 12/12 tests passing (100%) |
| **Phase 7: S3 Object Storage** | ✅ Complete | S3-compatible endpoints, authentication, bucket/key mapping, analytics | 31/31 tests passing (100%) |
| **Phase 8: Auto-Shutdown & Monitoring** | ✅ Complete | Real-time monitoring dashboard, auto-shutdown triggers, event notifications | 22/22 tests passing (100%) |
| **Phase 9: CLI Integration** | ✅ Complete | Complete CLI interface for all features, S3 server commands, validation | 16/16 tests passing (100%) |
### Overall Project Health
- **Total Tests**: 225/225 automated tests passing
- **Pass Rate**: 100% across all implemented features
- **Code Coverage**: Comprehensive test coverage for all providers
- **Performance**: Optimized for large files and high-volume operations
- **Stability**: Production-ready with full CLI integration
## CLI Usage
### System Commands
**System information and validation:**
```bash
node index.js --help # Show all available commands
node index.js test # Test all providers
node index.js test --provider local # Test specific provider
node index.js validate # Validate configuration
node index.js info # Show system information
```
### File Operations
**Basic file management:**
```bash
# Upload/copy files
node index.js upload --file document.pdf --target ./uploads/document.pdf
node index.js copy --source ./file.pdf --target ./backup/file.pdf
# Download files (local copy)
node index.js download --source ./uploads/document.pdf --target ./downloads/
# Move and rename files
node index.js move --source ./file.pdf --target ./archive/file.pdf
node index.js rename --file ./old-name.pdf --name new-name.pdf
# Delete files
node index.js delete --file ./old-file.pdf --yes
# List and search files
node index.js list --directory ./uploads
node index.js list --directory ./uploads --recursive
node index.js search --query "*.pdf" --directory ./uploads
```
**Folder operations:**
```bash
# Create and manage directories
node index.js mkdir --directory ./new-folder
node index.js ls-folders --directory ./uploads
node index.js rmdir --directory ./old-folder --recursive --yes
```
### File Watching
**Start and manage file watching:**
```bash
# Start watching
node index.js watch ./documents # Watch directory
node index.js watch ./file.txt --no-recursive # Watch single file
node index.js watch ./project --events add,change # Filter events
# Manage watchers
node index.js watch-list # List active watchers
node index.js watch-list --verbose # Detailed watcher info
node index.js watch-status # Show watching statistics
node index.js watch-stop ./documents # Stop specific watcher
node index.js watch-stop --all # Stop all watchers
```
### Compression Operations
**Compress and decompress files:**
```bash
# Basic compression
node index.js compress --file ./document.pdf --output ./compressed.zip
node index.js compress --file ./folder --output ./archive.tar.gz --format tar.gz
node index.js compress --file ./data --output ./backup.zip --level 9
# Decompression
node index.js decompress --file ./archive.zip --directory ./extracted/
node index.js decompress --file ./backup.tar.gz --directory ./restored/ --overwrite
# Batch operations
node index.js compress-batch --directory ./files --output ./archives/
node index.js decompress-batch --directory ./archives --output ./extracted/
# Compression status
node index.js compression-status
```
### File Sharing & Tunneling
**Share files temporarily:**
```bash
# Basic file sharing
node index.js share ./document.pdf # Default 1h expiration
node index.js share ./folder --expires 30m # 30 minutes
node index.js share ./file.zip --expires 2h # 2 hours
node index.js share ./project --multi-download # Allow multiple downloads
# Advanced sharing options
node index.js share ./data.zip --expires 24h --keep-alive --no-auto-shutdown
# Tunnel management
node index.js tunnel-status # Show active tunnels and URLs
node index.js tunnel-cleanup # Clean up expired URLs and tunnels
```
### S3-Compatible File Server
**Start S3 server (Core Feature):**
```bash
# Basic S3 server
node index.js serve-s3 ./storage --port 5000
node index.js serve-s3 ./storage --port 5000 --auth # With authentication
# S3 server with bucket mapping
node index.js serve-s3 ./storage \
--bucket containers:./storage/containers \
--bucket images:./storage/images \
--bucket docs:./storage/documents \
--port 9000 --auth
# S3 server with tunnel (public access)
node index.js serve-s3 ./content \
--port 8000 --tunnel --tunnel-service ngrok \
--name my-public-server
# S3 server with monitoring
node index.js serve-s3 ./data \
--port 7000 --monitor --interactive \
--name monitoring-server
```
**S3 server with auto-shutdown:**
```bash
# Auto-shutdown after downloads
node index.js serve-s3 ./container-storage \
--port 9000 --auto-shutdown \
--shutdown-delay 30000 --max-idle 600000
# Background server mode
node index.js serve-s3 ./storage \
--port 5000 --background --name bg-server
# Container registry example
node index.js serve-s3 ./containers \
--bucket containers:./containers \
--bucket registry:./registry \
--port 9000 --auto-shutdown \
--name container-registry
```
**S3 server management:**
```bash
# Server status and control
node index.js server-status # Show all active servers
node index.js server-status --json # JSON output
node index.js server-stop --all # Stop all servers
node index.js server-stop --name my-server # Stop specific server
# Server cleanup
node index.js server-cleanup # Clean up stopped servers
```
### Real-Time Monitoring
**Monitor server activity:**
```bash
# Real-time monitoring (when server started with --monitor)
# Automatically displays:
# - Active downloads with progress bars
# - Server status and uptime
# - Download completion status
# - Auto-shutdown countdown
# Interactive mode (when server started with --interactive)
# Available commands in interactive mode:
# - status: Show server status
# - downloads: Show active downloads
# - stop: Stop the server
# - help: Show available commands
```
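The download progress bars the monitor displays can be approximated in a few lines of plain Node; a minimal sketch of the idea (not the dashboard's actual renderer):

```javascript
// Render a fixed-width text progress bar for download tracking.
function renderBar(done, total, width = 30) {
  const ratio = total > 0 ? Math.min(done / total, 1) : 0;
  const filled = Math.round(ratio * width);
  const pct = Math.round(ratio * 100);
  return `[${'#'.repeat(filled)}${'-'.repeat(width - filled)}] ${pct}%`;
}

console.log(renderBar(3, 5)); // [##################------------] 60%
```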
### NPM Scripts for Development
```bash
# Testing
npm test # Run all tests
npm run test:cli # Test CLI integration
npm run test:local # Test local operations
npm run test:watch # Test file watching
npm run test:compression # Test compression
npm run test:tunnel # Test tunneling
npm run test:http # Test HTTP server
npm run test:streaming # Test file streaming
npm run test:s3 # Test S3 server
npm run test:monitor # Test monitoring/auto-shutdown
# Demos
npm run demo:cli # CLI integration demo
npm run demo:basic # Basic operations demo
npm run demo:watch # File watching demo
npm run demo:compression # Compression demo
npm run demo:tunnel # File sharing demo
npm run demo:container-registry # Container registry demo (simple)
npm run demo:container-registry-full # Container registry demo (full)
npm run demo:container-registry-test # Test container registry components
# Server shortcuts
npm run serve-s3 # Start S3 server on port 5000
npm run server-status # Check server status
npm run server-stop # Stop all servers
# Cleanup
npm run clean # Clean test files
npm run clean:tests # Clean test results
```
### Common Use Cases
**Container Registry Demo (Quick Start):**
```bash
# Run the complete container registry demo
npm run demo:container-registry
# Or with real containers
npm run demo:container-registry-full
# Test the setup first
npm run demo:container-registry-test
```
This demo showcases:
- **Container Export**: Exports Podman/Docker containers to registry format
- **Public Tunneling**: Creates accessible HTTPS URLs via ngrok
- **Secure Access**: Generates temporary AWS-style credentials
- **Real-time Monitoring**: Shows download progress and statistics
- **Auto-shutdown**: Automatically cleans up when downloads complete
See [Container Registry Demo](./demos/container-registry-demo/README.md) for complete documentation.
**Container Registry Setup:**
```bash
# Set up S3-compatible container registry
node index.js serve-s3 ./container-storage \
--bucket containers:./container-storage/containers \
--bucket registry:./container-storage/registry \
--port 9000 --auto-shutdown \
--name container-registry
# Access containers at:
# http://localhost:9000/containers/manifest.json
# http://localhost:9000/containers/config.json
# http://localhost:9000/containers/layer1.tar
```
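Each bucket name becomes the first path segment of the URL. A hypothetical helper showing how bucket/key pairs map to path-style S3 URLs (the server performs this mapping internally; `s3ObjectUrl` is not part of the library):

```javascript
// Build a path-style S3 URL: http://<host>:<port>/<bucket>/<key>
function s3ObjectUrl(host, port, bucket, key) {
  // Encode the key but keep "/" separators readable.
  const encodedKey = encodeURIComponent(key).replace(/%2F/g, '/');
  return `http://${host}:${port}/${bucket}/${encodedKey}`;
}

console.log(s3ObjectUrl('localhost', 9000, 'containers', 'manifest.json'));
// http://localhost:9000/containers/manifest.json
```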
**Public File Sharing:**
```bash
# Share files with public tunnel
node index.js serve-s3 ./public-files \
--port 8000 --tunnel --tunnel-service ngrok \
--bucket files:./public-files \
--name public-share
# Or temporary file sharing
node index.js share ./important-file.zip \
--expires 24h --multi-download
```
**Development File Server:**
```bash
# Development server with monitoring
node index.js serve-s3 ./dev-content \
--port 3000 --monitor --interactive \
--bucket assets:./dev-content/assets \
--bucket uploads:./dev-content/uploads
```
**Automated Backup System:**
```bash
# Watch and compress new files
node index.js watch ./documents &
# In another terminal, set up S3 server for backup access
node index.js serve-s3 ./backups \
--bucket daily:./backups/daily \
--bucket weekly:./backups/weekly \
--port 9090 --auth
```
## Library Usage
### Installation as Node.js Module
```bash
npm install local-remote-file-manager
```
### Quick Start - Factory Functions
The library provides convenient factory functions for quick setup:
```javascript
const {
createManager,
createS3Server,
compressFile,
watchDirectory
} = require('local-remote-file-manager');
// Quick file operations
async function quickStart() {
// Create a file manager with default config
const manager = await createManager();
// Get providers for different operations
const local = manager.getProvider('local');
const files = await local.list('./my-directory');
// Quick compression
await compressFile('./my-folder', './archive.zip');
// Start file watching
const watcher = await watchDirectory('./watched-folder');
watcher.on('change', (data) => {
console.log('File changed:', data.path);
});
}
```
### Basic Integration
```javascript
const { LocalRemoteManager, ConfigManager } = require('local-remote-file-manager');
class FileManagementApp {
constructor() {
this.config = new ConfigManager();
this.fileManager = null;
}
async initialize() {
await this.config.load();
this.fileManager = new LocalRemoteManager(this.config);
// Set up event handling
this.fileManager.on('fileEvent', (data) => {
console.log('File event:', data.type, data.path);
});
}
async processFile(inputPath, outputPath) {
const local = this.fileManager.getProvider('local');
const compression = this.fileManager.getCompressionProvider();
// Copy file
await local.copy(inputPath, outputPath);
// Compress file
const result = await compression.compress(
outputPath,
outputPath.replace(/\.[^/.]+$/, '.zip')
);
return result;
}
}
```
### S3-Compatible Server
```javascript
const { createS3Server } = require('local-remote-file-manager');
const { S3HttpServer } = require('local-remote-file-manager/src/s3Server');
async function createFileServer() {
const server = createS3Server({
port: 5000,
rootDirectory: './storage',
bucketMapping: new Map([
['public', './public-files'],
['private', './private-files']
]),
// Authentication
authentication: {
enabled: true,
tempCredentials: true
},
// Monitoring and auto-shutdown
monitoring: {
enabled: true,
dashboard: true
},
autoShutdown: {
enabled: true,
timeout: 3600000 // 1 hour
}
});
// Event handling
server.on('request', (data) => {
console.log(`${data.method} ${data.path}`);
});
server.on('download', (data) => {
console.log(`Downloaded: ${data.path} (${data.size} bytes)`);
});
await server.start();
console.log('S3 server running on http://localhost:5000');
return server;
}
class ContainerRegistryServer {
constructor() {
this.server = null;
}
async startWithAutoShutdown() {
// Create S3 server with auto-shutdown and monitoring
this.server = new S3HttpServer({
port: 9000,
serverName: 'container-registry',
rootDirectory: './container-storage',
// Auto-shutdown configuration
enableAutoShutdown: true,
shutdownOnCompletion: true,
shutdownTriggers: ['completion', 'timeout', 'manual'],
completionShutdownDelay: 30000, // 30 seconds after completion
maxIdleTime: 600000, // 10 minutes idle
maxTotalTime: 3600000, // 1 hour maximum
// Real-time monitoring
enableRealTimeMonitoring: true,
enableDownloadTracking: true,
monitoringUpdateInterval: 2000, // 2 seconds
// Event notifications
enableEventNotifications: true,
notificationChannels: ['console', 'file'],
// S3 configuration
enableAuth: false, // Simplified for container usage
bucketMapping: new Map([
['containers', 'container-files'],
['registry', 'registry-data']
])
});
// Setup event listeners
this.setupEventListeners();
// Start the server
const result = await this.server.start();
console.log(`Container registry started: ${result.localUrl}`);
// Configure expected downloads for auto-shutdown
await this.configureExpectedDownloads();
return result;
}
setupEventListeners() {
// Download progress tracking
this.server.on('downloadStarted', (info) => {
console.log(`Download started: ${info.bucket}/${info.key} (${this.formatBytes(info.fileSize)})`);
});
this.server.on('downloadCompleted', (info) => {
console.log(`Download completed: ${info.bucket}/${info.key} in ${info.duration}ms`);
});
this.server.on('downloadFailed', (info) => {
console.log(`Download failed: ${info.bucket}/${info.key} - ${info.error}`);
});
// Auto-shutdown events
this.server.on('allDownloadsComplete', (info) => {
console.log(`All downloads complete! Auto-shutdown will trigger in ${info.shutdownDelay / 1000}s`);
});
this.server.on('shutdownScheduled', (info) => {
console.log(`Server shutdown scheduled: ${info.reason} (${Math.round(info.delay / 1000)}s)`);
});
this.server.on('shutdownWarning', (info) => {
console.log(`Server shutting down in ${Math.round(info.timeRemaining / 1000)} seconds`);
});
}
async configureExpectedDownloads() {
// Set expected container downloads
const expectedDownloads = [
{ bucket: 'containers', key: 'manifest.json', size: 1024 },
{ bucket: 'containers', key: 'config.json', size: 512 },
{ bucket: 'containers', key: 'layer-1.tar', size: 1048576 }, // 1MB
{ bucket: 'containers', key: 'layer-2.tar', size: 2097152 }, // 2MB
];
const result = this.server.setExpectedDownloads(expectedDownloads);
if (result.success) {
console.log(`Configured ${result.expectedCount} expected downloads (${this.formatBytes(result.totalBytes)} total)`);
}
}
formatBytes(bytes) {
if (bytes === 0) return '0 B';
const k = 1024;
const sizes = ['B', 'KB', 'MB', 'GB'];
const i = Math.floor(Math.log(bytes) / Math.log(k));
return parseFloat((bytes / Math.pow(k, i)).toFixed(2)) + ' ' + sizes[i];
}
async getMonitoringData() {
// Get real-time monitoring data
return {
serverStatus: this.server.getStatus(),
downloadStats: this.server.getDownloadStats(),
dashboardData: this.server.getMonitoringData(),
completionStatus: this.server.getDownloadCompletionStatus()
};
}
async gracefulShutdown() {
console.log('Initiating graceful shutdown...');
await this.server.stop({ graceful: true, timeout: 30000 });
console.log('Server stopped gracefully');
}
}
// Usage example
async function runContainerRegistry() {
const registry = new ContainerRegistryServer();
try {
// Start server with monitoring
await registry.startWithAutoShutdown();
// Server will automatically shut down when all expected downloads complete
// or after timeout periods are reached
// Manual shutdown if needed
process.on('SIGINT', async () => {
await registry.gracefulShutdown();
process.exit(0);
});
} catch (error) {
console.error('Failed to start container registry:', error);
}
}
```
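The expected-download list drives the completion trigger: once every configured bucket/key has been delivered, shutdown is scheduled. The bookkeeping amounts to something like this sketch (`completionStatus` is a hypothetical helper, not the server's internal API):

```javascript
// Compare completed downloads against the expected list (illustrative).
function completionStatus(expected, completedKeys) {
  const done = expected.filter((d) => completedKeys.has(`${d.bucket}/${d.key}`));
  const totalBytes = expected.reduce((sum, d) => sum + d.size, 0);
  const doneBytes = done.reduce((sum, d) => sum + d.size, 0);
  return {
    complete: done.length === expected.length,
    progress: totalBytes > 0 ? doneBytes / totalBytes : 1
  };
}

const expected = [
  { bucket: 'containers', key: 'manifest.json', size: 1024 },
  { bucket: 'containers', key: 'layer-1.tar', size: 1048576 }
];
const status = completionStatus(expected, new Set(['containers/manifest.json']));
console.log(status.complete); // false
```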
### Advanced File Watching
```javascript
const { LocalRemoteManager } = require('local-remote-file-manager');
class DocumentWatcher {
constructor() {
this.fileManager = new LocalRemoteManager();
this.setupEventHandlers();
}
setupEventHandlers() {
this.fileManager.on('fileAdded', (event) => {
console.log(`New file detected: ${event.filePath}`);
this.processNewFile(event.filePath);
});
this.fileManager.on('fileChanged', (event) => {
console.log(`File modified: ${event.filePath}`);
this.handleFileChange(event.filePath);
});
this.fileManager.on('fileRemoved', (event) => {
console.log(`File deleted: ${event.filePath}`);
this.handleFileRemoval(event.filePath);
});
}
async startWatching(directory) {
const watchResult = await this.fileManager.startWatching(directory, {
recursive: true,
events: ['add', 'change', 'unlink'],
ignoreDotfiles: true
});
console.log(`Started watching: ${watchResult.watchId}`);
return watchResult;
}
async processNewFile(filePath) {
try {
// Auto-compress large files
const fileInfo = await this.fileManager.getFileInfo(filePath);
if (fileInfo.size > 10 * 1024 * 1024) { // 10MB
const compressedPath = filePath + '.zip';
await this.fileManager.compressFile(filePath, compressedPath);
console.log(`Auto-compressed large file: ${compressedPath}`);
}
} catch (error) {
console.error(`Failed to process new file: ${error.message}`);
}
}
}
```
### HTTP Server with Tunnel Integration
```javascript
const { LocalRemoteManager } = require('local-remote-file-manager');
class FileServerApp {
constructor() {
this.fileManager = new LocalRemoteManager();
}
async startTunneledServer(contentDirectory) {
// Create HTTP server with automatic tunnel
const serverInfo = await this.fileManager.createTunneledServer({
port: 3000,
rootDirectory: contentDirectory,
tunnelService: 'serveo' // or 'ngrok'
});
console.log(`Local server: http://localhost:${serverInfo.port}`);
console.log(`Public URL: ${serverInfo.tunnelUrl}`);
return serverInfo;
}
async addCustomRoutes(serverId) {
// Add high-priority custom routes that override generic patterns
await this.fileManager.addCustomRoute(
serverId,
'GET',
'/api/status',
(req, res) => {
res.json({ status: 'active', timestamp: new Date().toISOString() });
},
{ priority: 100 } // High priority overrides generic /:bucket/* routes
);
// Add API endpoint with medium priority
await this.fileManager.addCustomRoute(
serverId,
'GET',
'/api/:version/health',
(req, res) => {
res.json({ health: 'ok', version: req.params.version });
},
{ priority: 50 }
);
// Lower priority route (will be handled after higher priority routes)
await this.fileManager.addCustomRoute(
serverId,
'GET',
'/docs/:page',
(req, res) => {
res.send(`Documentation page: ${req.params.page}`);
},
{ priority: 10 }
);
console.log('Custom routes added with priority-based routing');
}
async serveFiles() {
const server = await this.startTunneledServer('./public-files');
// Add custom routes with priority system
await this.addCustomRoutes(server.serverId);
// Monitor server status
setInterval(async () => {
const status = await this.fileManager.getServerStatus(server.serverId);
console.log(`Server status: ${status.status}, Requests: ${status.requestCount}`);
}, 30000);
return server;
}
}
```
### S3-Compatible Object Storage
```javascript
const { S3HttpServer } = require('local-remote-file-manager/src/s3Server');
class S3ObjectStorage {
constructor() {
this.s3Server = new S3HttpServer({
port: 9000,
serverName: 'my-s3-server',
rootDirectory: './s3-storage',
enableAuth: true,
bucketMapping: new Map([
['documents', 'user-docs'],
['images', 'media/images'],
['backups', 'backup-storage']
]),
bucketAccessControl: new Map([
['documents', { read: true, write: true }],
['images', { read: true, write: false }],
['backups', { read: true, write: true }]
])
});
}
async start() {
const serverInfo = await this.s3Server.start();
console.log(`S3 Server running on port ${serverInfo.port}`);
// Generate temporary credentials
const credentials = this.s3Server.generateTemporaryCredentials({
permissions: ['read', 'write'],
buckets: ['documents', 'backups'],
expiryMinutes: 60
});
console.log(`Access Key: ${credentials.accessKey}`);
console.log(`Secret Key: ${credentials.secretKey}`);
return { serverInfo, credentials };
}
async enableMonitoring() {
// Start real-time monitoring dashboard
this.s3Server.startMonitoringDashboard({
updateInterval: 2000,
showServerStats: true,
showDownloadProgress: true,
showActiveDownloads: true,
showShutdownStatus: true
});
// Setup download analytics
this.s3Server.on('downloadCompleted', (info) => {
console.log(`Download analytics: ${info.key} (${info.bytes} bytes in ${info.duration}ms)`);
// Get real-time dashboard data
const dashboardData = this.s3Server.getMonitoringData();
console.log(`Total downloads: ${dashboardData.downloadStats.totalDownloads}`);
console.log(`Average speed: ${this.formatSpeed(dashboardData.downloadStats.averageSpeed)}`);
});
return true;
}
formatSpeed(bytesPerSecond) {
if (bytesPerSecond < 1024) return `${bytesPerSecond} B/s`;
if (bytesPerSecond < 1024 * 1024) return `${(bytesPerSecond / 1024).toFixed(1)} KB/s`;
return `${(bytesPerSecond / (1024 * 1024)).toFixed(1)} MB/s`;
}
}
// CLI Equivalent Commands:
// Instead of complex library setup, use simple CLI commands:
// Start S3 server with authentication and bucket mapping
// node index.js serve-s3 ./s3-storage --port 9000 --auth \
// --bucket documents:./s3-storage/user-docs \
// --bucket images:./s3-storage/media/images \
// --bucket backups:./s3-storage/backup-storage \
// --monitor --name my-s3-server
// S3 server with auto-shutdown for container registry
// node index.js serve-s3 ./containers --port 9000 \
// --bucket containers:./containers \
// --auto-shutdown --monitor --name container-registry
// S3 server with tunnel for public access
// node index.js serve-s3 ./public-files --port 8000 \
// --tunnel --tunnel-service ngrok --bucket files:./public-files
// Access via S3-compatible endpoints:
// GET http://localhost:9000/documents/myfile.pdf
// HEAD http://localhost:9000/images/photo.jpg
```
### CLI Integration Examples
The CLI provides direct access to all library features with simple commands:
```javascript
// Library approach (complex setup):
const server = new S3HttpServer({
enableAutoShutdown: true,
shutdownTriggers: ['completion'],
completionShutdownDelay: 30000,
enableRealTimeMonitoring: true
});
await server.start();
// CLI approach (simple command):
// node index.js serve-s3 ./storage --auto-shutdown --shutdown-delay 30000 --monitor
// Multiple operations with library require coordination:
// 1. Set up file watcher
// 2. Set up compression handler
// 3. Set up S3 server
// 4. Coordinate between them
// CLI approach - each command handles coordination:
// Terminal 1: node index.js watch ./documents
// Terminal 2: node index.js serve-s3 ./storage --port 9000 --monitor
// Terminal 3: node index.js compress-batch --directory ./documents --output ./archives/
```
### Container Registry Use Case
```bash
# Complete container registry setup with CLI:
mkdir -p ./container-storage/containers ./container-storage/registry
# Start S3-compatible container registry
node index.js serve-s3 ./container-storage \
--bucket containers:./container-storage/containers \
--bucket registry:./container-storage/registry \
--port 9000 --auto-shutdown --monitor \
--name container-registry
# Server automatically shuts down after container downloads complete
# Real-time monitoring shows download progress and completion status
# Access containers at: http://localhost:9000/containers/<filename>
```
### Real-Time Monitoring Dashboard
```javascript
const { MonitoringDashboard, DownloadMonitor } = require('local-remote-file-manager');
class LiveMonitoringSystem {
constructor() {
this.dashboard = new MonitoringDashboard({
updateInterval: 1000,
showServerStats: true,
showDownloadProgress: true,
showActiveDownloads: true,
showShutdownStatus: true
});
this.downloadMonitor = new DownloadMonitor({
trackPartialDownloads: true,
progressUpdateInterval: 1000
});
}
async startMonitoring(s3Server) {
// Connect monitoring to S3 server
this.dashboard.connectToServer(s3Server);
this.downloadMonitor.connectToServer(s3Server);
// Start real-time dashboard
this.dashboard.start();
// Setup download tracking
this.downloadMonitor.on('downloadStarted', (info) => {
this.dashboard.addActiveDownload(info);
});
this.downloadMonitor.on('downloadProgress', (info) => {
this.dashboard.updateDownloadProgress(info.downloadId, info);
});
this.downloadMonitor.on('downloadCompleted', (info) => {
this.dashboard.completeDownload(info.downloadId, info);
});
// Example dashboard output:
/*
+----------------------------------------------------------------+
| S3 Object Storage Server                                       |
+----------------------------------------------------------------+
| Status: RUNNING                                    Uptime: 45s |
| Port: 9000               Public URL: http://localhost:9000     |
+----------------------------------------------------------------+
| Downloads Progress                                             |
| [##################------------] 3/5 (60%)                     |
| Active Downloads: 2        Completed: 3        Failed: 0       |
| Speed: 1.2 MB/s            Total: 2.1 MB                       |
+----------------------------------------------------------------+
| Active Downloads                                               |
| > layer-2.tar (1.2 MB) [########################------] 80% @ 450 KB/s |
| > layer-3.tar (512 KB) [#############-----------------] 45% @ 230 KB/s |
+----------------------------------------------------------------+
| Auto-Shutdown: ON          Trigger: Completion + 30s           |
| Next Check: 00:00:05       Status: Monitoring                  |
+----------------------------------------------------------------+
*/
console.log('Real-time monitoring dashboard started');
return true;
}
async stopMonitoring() {
this.dashboard.stop();
this.downloadMonitor.stop();
console.log('Monitoring stopped');
}
async getAnalytics() {
return {
dashboard: this.dashboard.getCurrentData(),
downloads: this.downloadMonitor.getStatistics(),
performance: this.downloadMonitor.getPerformanceMetrics()
};
}
}
```
### Download Analytics
The `S3ObjectStorage` class shown earlier can also report aggregate download statistics:
```javascript
// Analytics method for the S3ObjectStorage class above
async getAnalytics() {
const analytics = this.s3Server.generateDownloadAnalytics({
includeDetails: true
});
console.log(`Total Downloads: ${analytics.summary.totalDownloads}`);
console.log(`Average Speed: ${analytics.performance.averageSpeed}`);
console.log(`Server Uptime: ${analytics.performance.uptime}s`);
return analytics;
}
// Usage
const storage = new S3ObjectStorage();
await storage.start();
await storage.enableMonitoring();
await storage.getAnalytics();
```
### Enhanced File Streaming
```javascript
const { FileStreamingUtils, DownloadTracker } = require('local-remote-file-manager');

class StreamingFileServer {
  async serveFileWithProgress(filePath, response, rangeHeader = null) {
    try {
      // Get file information
      const fileInfo = await FileStreamingUtils.getFileInfo(filePath);
      console.log(`📄 Serving: ${fileInfo.name} (${fileInfo.size} bytes)`);

      // Create download tracker
      const tracker = new DownloadTracker(fileInfo.size);

      // Handle range request if specified
      let streamOptions = {};
      if (rangeHeader) {
        const range = FileStreamingUtils.parseRangeHeader(rangeHeader, fileInfo.size);
        if (range.isValid) {
          streamOptions = { start: range.start, end: range.end };
          response.status = 206; // Partial Content
          response.setHeader('Content-Range',
            FileStreamingUtils.formatContentRange(range.start, range.end, fileInfo.size)
          );
        }
      }

      // Set response headers
      response.setHeader('Content-Type', FileStreamingUtils.getMimeType(filePath));
      response.setHeader('Content-Length', streamOptions.end !== undefined ?
        (streamOptions.end - streamOptions.start + 1) : fileInfo.size);
      response.setHeader('ETag', FileStreamingUtils.generateETag(fileInfo));
      response.setHeader('Last-Modified', fileInfo.lastModified.toUTCString());
      response.setHeader('Accept-Ranges', 'bytes');

      // Create and pipe stream
      const stream = await FileStreamingUtils.createReadStream(filePath, streamOptions);
      stream.on('data', (chunk) => {
        tracker.updateProgress(chunk.length);
        const progress = tracker.getProgress();
        console.log(`📊 Progress: ${progress.percentage}% (${progress.speed}/s)`);
      });
      stream.on('end', () => {
        console.log(`✅ Transfer complete: ${filePath}`);
      });
      stream.pipe(response);
    } catch (error) {
      console.error(`❌ Streaming error: ${error.message}`);
      response.status = 500;
      response.end('Internal Server Error');
    }
  }
}
```
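The example above delegates range parsing to `FileStreamingUtils.parseRangeHeader`. As a standalone illustration of how single-range `Range` headers resolve to byte offsets (a simplified sketch, not the library's implementation):

```javascript
// Parse a single-range HTTP Range header such as "bytes=0-499",
// "bytes=500-" (from offset to end), or "bytes=-200" (last 200 bytes).
// Returns { isValid, start, end } to mirror the shape used above.
function parseRangeHeader(header, fileSize) {
  const match = /^bytes=(\d*)-(\d*)$/.exec(header || '');
  if (!match || (match[1] === '' && match[2] === '')) {
    return { isValid: false };
  }
  let start, end;
  if (match[1] === '') {
    // Suffix range: "bytes=-200" means the last 200 bytes.
    start = fileSize - Number(match[2]);
    end = fileSize - 1;
  } else {
    start = Number(match[1]);
    end = match[2] === '' ? fileSize - 1 : Number(match[2]);
  }
  const isValid = start >= 0 && start <= end && end < fileSize;
  return isValid ? { isValid, start, end } : { isValid: false };
}

console.log(parseRangeHeader('bytes=0-499', 1000)); // { isValid: true, start: 0, end: 499 }
console.log(parseRangeHeader('bytes=-200', 1000));  // { isValid: true, start: 800, end: 999 }
```

Note that both offsets are inclusive, which is why the Content-Length of a partial response above is computed as `end - start + 1`.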
### Batch File Operations
```javascript
const { LocalRemoteManager } = require('local-remote-file-manager');

class BatchFileProcessor {
  constructor() {
    this.fileManager = new LocalRemoteManager();
  }

  async backupDocuments(sourceDirectory, backupDirectory) {
    // List all files
    const files = await this.fileManager.listFiles(sourceDirectory, { recursive: true });

    // Filter for documents
    const documents = files.filter(file =>
      /\.(pdf|doc|docx|txt|md)$/i.test(file.name)
    );
    console.log(`Found ${documents.length} documents to backup`);

    // Batch compress all documents
    const compressionResults = await this.fileManager.compressMultipleFiles(
      documents.map(doc => doc.path),
      backupDirectory,
      {
        format: 'zip',
        compressionLevel: 6,
        preserveStructure: true
      }
    );

    // Generate temporary share URLs for all backups
    const shareResults = await Promise.all(
      compressionResults.successful.map(async (result) => {
        return await this.fileManager.createShareableUrl(result.outputPath, {
          expiresIn: '24h',
          downloadLimit: 5
        });
      })
    );

    return {
      processed: documents.length,
      compressed: compressionResults.successful.length,
      failed: compressionResults.failed.length,
      shared: shareResults.length,
      shareUrls: shareResults.map(r => r.shareableUrl)
    };
  }

  async syncDirectories(sourceDir, targetDir) {
    const sourceFiles = await this.fileManager.listFiles(sourceDir, { recursive: true });
    const targetFiles = await this.fileManager.listFiles(targetDir, { recursive: true });

    const results = {
      copied: [],
      updated: [],
      errors: []
    };

    for (const file of sourceFiles) {
      try {
        const targetPath = file.path.replace(sourceDir, targetDir);
        const targetExists = targetFiles.some(t => t.path === targetPath);

        if (!targetExists) {
          await this.fileManager.copyFile(file.path, targetPath);
          results.copied.push(file.path);
        } else {
          const sourceInfo = await this.fileManager.getFileInfo(file.path);
          const targetInfo = await this.fileManager.getFileInfo(targetPath);

          if (sourceInfo.lastModified > targetInfo.lastModified) {
            await this.fileManager.copyFile(file.path, targetPath);
            results.updated.push(file.path);
          }
        }
      } catch (error) {
        results.errors.push({ file: file.path, error: error.message });
      }
    }

    return results;
  }
}
```
## 🧪 Testing
### Automated Testing
Run comprehensive tests for all features:
```bash
npm test # Interactive test selection
npm run test:all # Test all providers sequentially
npm run test:cli # Test CLI integration (NEW)
npm run test:local # Test local file operations
npm run test:watch # Test file watching
npm run test:compression # Test compression features
npm run test:tunnel # Test tunneling and sharing
npm run test:http # Test HTTP server provider
npm run test:streaming # Test enhanced file streaming
npm run test:s3 # Test S3-compatible object storage
npm run test:monitor # Test auto-shutdown & monitoring
```
### Test Coverage by Feature
#### Phase 1: Local File Operations (33/33 tests passing)
- ✅ **Basic File Operations**: Upload, download, copy, move, rename, delete
- ✅ **Folder Operations**: Create, list, delete, rename folders
- ✅ **Path Management**: Absolute/relative paths, normalization, validation
- ✅ **Search Operations**: File search by name, pattern matching, recursive search
- ✅ **Error Handling**: Non-existent files, invalid paths, permission errors
#### Phase 2: File Watching (24/24 tests passing)
- ✅ **Directory Watching**: Start/stop watching, recursive monitoring
- ✅ **Event Filtering**: Add, change, delete events with custom filtering
- ✅ **Performance Tests**: High-frequency events, batch event processing
- ✅ **Edge Cases**: Non-existent paths, permission issues, invalid events
- ✅ **Resource Management**: Watcher lifecycle, memory cleanup
#### Phase 3: Compression (30/30 tests passing)
- ✅ **ZIP Operations**: Compression, decompression, multiple compression levels
- ✅ **TAR.GZ Operations**: Archive creation, extraction, directory compression
- ✅ **Format Detection**: Automatic format detection, cross-format operations
- ✅ **Progress Tracking**: Real-time progress events, operation monitoring
- ✅ **Batch Operations**: Multiple file compression, batch decompression
- ✅ **Performance**: Large file handling, memory efficiency tests
#### Phase 4: Tunneling & File Sharing (35/35 tests passing)
- ✅ **Tunnel Management**: Create, destroy tunnels with multiple services
- ✅ **Temporary URLs**: URL generation, expiration, access control
- ✅ **File Sharing**: Secure sharing, download tracking, permission management
- ✅ **Service Integration**: ngrok, localtunnel, fallback mechanisms
- ✅ **Security**: Access tokens, expiration handling, cleanup
#### Phase 5: HTTP Server Provider (22/22 tests passing)
- ✅ **Server Lifecycle Management**: Create, start, stop HTTP servers
- ✅ **Static File Serving**: MIME detection, range requests, security headers
- ✅ **Route Registration**: Parameterized routes, middleware support
- ✅ **Tunnel Integration**: Automatic tunnel creation with multiple services
- ✅ **Server Monitoring**: Status tracking, metrics collection, health checks
#### Phase 6: Enhanced File Streaming (12/12 tests passing)
- ✅ **Advanced Streaming**: Range-aware streams, progress tracking
- ✅ **MIME Type Detection**: 40+ file types, automatic detection
- ✅ **Range Request Processing**: Comprehensive range header parsing
- ✅ **Progress Tracking**: Real-time progress, speed calculation
- ✅ **Performance Optimization**: Memory efficiency, large file handling
#### Phase 7: S3-Compatible Object Storage (31/31 tests passing)
- ✅ **S3 GET/HEAD Endpoints**: Object downloads and metadata queries
- ✅ **Authentication System**: AWS-style, Bearer token, Basic auth
- ✅ **Bucket/Key Mapping**: Path mapping with security validation
- ✅ **S3-Compatible Headers**: ETag, Last-Modified, Content-Range
- ✅ **Download Analytics**: Progress tracking, real-time monitoring
- ✅ **Rate Limiting**: Credential management, access control
#### Phase 8: Auto-Shutdown & Monitoring (22/22 tests passing)
- ✅ **Auto-Shutdown Triggers**: Completion, timeout, idle detection
- ✅ **Real-Time Monitoring**: Dashboard, progress bars, status display
- ✅ **Download Tracking**: Individual downloads, completion status
- ✅ **Event Notifications**: Console, file, webhook notifications
- ✅ **Expected Downloads**: Configuration, progress calculation
#### Phase 9: CLI Integration (16/16 tests passing)
- ✅ **Command Validation**: Help commands, option parsing, error handling
- ✅ **S3 Server Commands**: Server start with auth, bucket mapping, monitoring
- ✅ **Server Management**: Status commands, stop commands, cleanup
- ✅ **Configuration Validation**: Directory validation, port conflicts
- ✅ **Error Handling**: Graceful errors, permission handling
### Test Results Summary
```
📊 Overall Test Results
=======================
✅ Total Tests: 225
✅ Passed: 225 (100%)
❌ Failed: 0 (0%)
⏭ Skipped: 0 (0%)
🎯 Success Rate: 100%

⚡ Performance Metrics
=====================
⏱️ Average Test Duration: 15ms
🚀 Fastest Category: Local Operations (2ms avg)
🐌 Slowest Category: CLI Integration (1000ms avg)
📊 Total Test Suite Time: ~5 minutes

🎉 All features are production-ready!
```
### Manual Testing & Demos
Validate functionality with built-in demos:
```bash
npm run demo:cli # CLI integration demo (NEW)
npm run demo:basic # Basic file operations demo
npm run demo:watch # File watching demonstration
npm run demo:compression # Compression feature demo
npm run demo:tunnel # File sharing demo
npm run demo:s3 # S3 server demo (NEW)
npm run demo:monitor # Auto-shutdown & monitoring demo (NEW)
```
## 📚 API Reference
### LocalRemoteManager
#### Core File Operations
- `uploadFile(sourcePath, targetPath)` - Copy file to target location (alias for local copy)
- `downloadFile(remotePath, localPath)` - Download/copy file from remote location (alias for local copy)
- `getFileInfo(filePath)` - Get file metadata, size, timestamps, and permissions
- `listFiles(directoryPath, options)` - List files with recursive and filtering options
- `deleteFile(filePath)` - Delete a file with error handling
- `copyFile(sourcePath, destinationPath)` - Copy a file to new location
- `moveFile(sourcePath, destinationPath)` - Move a file to new location
- `renameFile(filePath, newName)` - Rename a file in same directory
- `searchFiles(pattern, options)` - Search for files by name pattern with recursive support
#### Folder Operations
- `createFolder(folderPath)` - Create a new folder with recursive support
- `listFolders(directoryPath)` - List only directories in a path
- `deleteFolder(folderPath, recursive)` - Delete a folder with optional recursive deletion
- `renameFolder(folderPath, newName)` - Rename a folder in same parent directory
- `copyFolder(sourcePath, destinationPath)` - Copy entire folder structure
- `moveFolder(sourcePath, destinationPath)` - Move entire folder structure
- `getFolderInfo(folderPath)` - Get folder metadata including item count and total size
#### File Watching
- `startWatching(path, options)` - Start monitoring file/directory changes
- Options: `recursive`, `events`, `ignoreDotfiles`, `debounceMs`
- `stopWatching(watchId | path)` - Stop a specific watcher by ID or path
- `stopAllWatching()` - Stop all active watchers with cleanup
- `listActiveWatchers()` - Get array of active watcher objects
- `getWatcherInfo(watchId)` - Get detailed watcher information including event count
- `getWatchingStatus()` - Get overall watching system status and statistics
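The `debounceMs` option above coalesces bursts of change events for the same path into a single callback. The underlying technique can be sketched standalone (a simplified illustration, not the package's implementation):

```javascript
// Coalesce rapid successive events per path: only the last event within
// delayMs for a given path reaches the callback.
function debounceEvents(callback, delayMs) {
  const timers = new Map();
  return (event, filePath) => {
    clearTimeout(timers.get(filePath));
    timers.set(filePath, setTimeout(() => {
      timers.delete(filePath);
      callback(event, filePath);
    }, delayMs));
  };
}

// Usage: repeated changes to a.txt within 250 ms fire the handler once.
const onChange = debounceEvents((event, file) => {
  console.log(`${event}: ${file}`);
}, 250);
onChange('change', 'a.txt');
onChange('change', 'a.txt'); // supersedes the first call
```

Keying the timers by path is what lets simultaneous edits to different files still produce one event each.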
#### Compression Operations
- `compressFile(inputPath, outputPath, options)` - Compress file or directory
- Options: `format` (zip, tar.gz), `level` (1-9), `includeRoot`
- `decompressFile(archivePath, outputDirectory, options)` - Extract archive contents
- Options: `format`, `overwrite`, `preservePermissions`
- `compressMultipleFiles(fileArray, outputDirectory, options)` - Batch compression with progress
- `decompressMultipleFiles(archiveArray, outputDirectory, options)` - Batch extraction
- `getCompressionStatus()` - Get compression system status and supported formats
- `getCompressionProvider()` - Access compression provider directly
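The automatic format detection mentioned above typically keys off the archive's file extension. A minimal standalone sketch of that logic (illustrative only, not the package's implementation):

```javascript
// Infer the archive format from a file name; null when unrecognized.
// Checks .tar.gz/.tgz before .zip since compound extensions must win.
function detectArchiveFormat(fileName) {
  const lower = fileName.toLowerCase();
  if (lower.endsWith('.tar.gz') || lower.endsWith('.tgz')) return 'tar.gz';
  if (lower.endsWith('.zip')) return 'zip';
  return null;
}

console.log(detectArchiveFormat('backup.tar.gz')); // → 'tar.gz'
console.log(detectArchiveFormat('photos.ZIP'));    // → 'zip'
console.log(detectArchiveFormat('notes.txt'));     // → null
```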
#### Tunneling & File Sharing
- `createTunnel(options)` - Create new tunnel connection
- Options: `proto` (http, https), `subdomain`, `authToken`, `useExternalServer`, `localPort`
- `useExternalServer: true` - Forward tunnel to existing HTTP server instead of creating internal server
- `localPort: number` - Specify external server port to forward tunnel traffic to
- `destroyTunnel(tunnelId)` - Destroy specific tunnel connection
- `createTemporaryUrl(filePath, options)` - Generate temporary shareable URL
- Options: `permissions`, `expiresAt`, `downloadLimit`
- `revokeTemporaryUrl(urlId)` - Revoke access to shared URL
- `listActiveUrls()` - Get list of active temporary URLs
- `getTunnelStatus()` - Get tunneling system status including