@kadi.build/local-remote-file-manager-ability

Local & Remote File Management System with S3-compatible container registry, HTTP server provider, file streaming, and comprehensive testing suite

1,430 lines (1,171 loc) • 62.8 kB
# Local & Remote File Manager

A comprehensive Node.js CLI tool and library for local file management with advanced features including **real-time file watching**, **compression/decompression**, **secure temporary file sharing via tunneling**, and **S3-compatible object storage**. This unified file management system provides powerful local operations with the capability to extend to remote server operations in future releases.

[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Node.js](https://img.shields.io/badge/Node.js-%3E%3D16.0.0-green.svg)](https://nodejs.org/)
[![Test Status](https://img.shields.io/badge/Tests-225%2F225%20Passing-brightgreen.svg)](#test-results)

## Features

- **Complete Local File Management**: Full CRUD operations for files and folders with advanced path handling
- **Real-time File Watching**: Monitor file and directory changes with event filtering and callbacks
- **Advanced Compression**: ZIP and TAR.GZ compression/decompression with progress tracking
- **Secure File Sharing**: Temporary URL generation with tunnel-based sharing (ngrok/localtunnel integration)
- **HTTP Server Provider**: Complete HTTP server management with static file serving and tunnel integration
- **Enhanced File Streaming**: Optimized file streaming with range requests and progress tracking
- **S3-Compatible Object Storage**: Full S3 endpoints with authentication, bucket mapping, and analytics
- **Real-Time Monitoring Dashboard**: Live progress tracking with visual dashboard and download analytics
- **Auto-Shutdown Management**: Intelligent shutdown triggers based on download completion or timeout
- **Event Notification System**: Comprehensive event system with console, file, and webhook notifications
- **Production-Ready CLI**: Complete command-line interface for all features with interactive help
- **High Performance**: Efficient memory
usage, streaming for large files, and batch operations
- **CLI & Library**: Use as a command-line tool or integrate as a Node.js library
- **Robust Error Handling**: Comprehensive error handling and retry logic
- **Progress Tracking**: Real-time progress for long-running operations
- **Path Management**: Automatic folder creation and path normalization
- **Comprehensive Testing**: Full test suite with 225/225 tests passing (100% success rate)

## Table of Contents

- [Installation](#installation)
- [Quick Start](#quick-start)
- [Development Status](#development-status)
- [CLI Usage](#cli-usage)
- [Library Usage](#library-usage)
- [Testing](#testing)
- [API Reference](#api-reference)
- [Performance](#performance)
- [Contributing](#contributing)

## Installation

### As a CLI Tool

```bash
git clone <repository-url>
cd local-remote-file-manager-ability
npm install
npm run setup
```

### Global Installation

```bash
npm install -g local-remote-file-manager
```

### As a Node.js Library

```bash
npm install local-remote-file-manager
```

```javascript
const { createManager, compressFile } = require('local-remote-file-manager');

// Quick start - factory functions
const manager = await createManager();
const files = await manager.getProvider('local').list('./');

// Quick compression
await compressFile('./my-folder', './archive.zip');
```

**See [USAGE.md](./USAGE.md) for complete examples and [INTEGRATION-EXAMPLE.md](./INTEGRATION-EXAMPLE.md) for real-world integration patterns.**

## Quick Start

### CLI Quick Start

1. **Install dependencies**

   ```bash
   npm install
   ```

2. **Test your setup**

   ```bash
   npm test
   # or test specific features
   npm run test:cli
   npm run test:local
   npm run test:s3
   ```

3. **Basic file operations**

   ```bash
   node index.js copy --source document.pdf --target ./uploads/document.pdf
   node index.js upload --file data.zip --target ./uploads/
   node index.js list --directory ./uploads
   ```

4. **Start the S3-compatible file server**

   ```bash
   node index.js serve-s3 ./files --port 5000 --auth
   # Access at http://localhost:5000/default/<filename>
   ```

5. **S3 server with bucket mapping**

   ```bash
   node index.js serve-s3 ./storage \
     --bucket containers:./storage/containers \
     --bucket images:./storage/images \
     --port 9000 --auth --tunnel
   ```

6. **File server with auto-shutdown**

   ```bash
   node index.js serve-s3 ./content --port 8000 \
     --auto-shutdown --shutdown-delay 30000
   ```

7. **Start watching a directory**

   ```bash
   node index.js watch ./documents
   ```

8. **Compress files**

   ```bash
   node index.js compress --file ./large-file.txt --output ./compressed.zip
   ```

9. **Share a file temporarily**

   ```bash
   node index.js share ./document.pdf --expires 30m
   ```

## Development Status

### Completed Phases (Production Ready)

| Phase | Status | Features | Test Results |
|-------|--------|----------|--------------|
| **Phase 1: Foundation & Local CRUD** | ✅ Complete | File/folder CRUD, path management, search operations | 33/33 tests passing (100%) |
| **Phase 2: File/Directory Watching** | ✅ Complete | Real-time monitoring, event filtering, recursive watching | 24/24 tests passing (100%) |
| **Phase 3: Compression/Decompression** | ✅ Complete | ZIP/TAR.GZ support, batch operations, progress tracking | 30/30 tests passing (100%) |
| **Phase 4: Tunneling & Temp URLs** | ✅ Complete | Secure file sharing, temporary URLs, multiple tunnel services | 35/35 tests passing (100%) |
| **Phase 5: HTTP Server Provider** | ✅ Complete | HTTP server management, static file serving, tunnel integration | 22/22 tests passing (100%) |
| **Phase 6: File Streaming Enhancement** | ✅ Complete | Enhanced streaming, range requests, MIME detection, progress tracking | 12/12 tests passing (100%) |
| **Phase 7: S3 Object Storage** | ✅ Complete | S3-compatible endpoints, authentication, bucket/key mapping, analytics | 31/31 tests passing (100%) |
| **Phase 8: Auto-Shutdown & Monitoring** | ✅ Complete | Real-time monitoring dashboard, auto-shutdown triggers, event notifications | 22/22 tests passing (100%) |
| **Phase 9: CLI Integration** | ✅ Complete | Complete CLI interface for all features, S3 server commands, validation | 16/16 tests passing (100%) |

### Overall Project Health

- **Total Tests**: 225/225 automated tests passing
- **Pass Rate**: 100% across all implemented features
- **Code Coverage**: Comprehensive test coverage for all providers
- **Performance**: Optimized for large files and high-volume operations
- **Stability**: Production-ready with full CLI integration

## CLI Usage

### System Commands

**System information and validation:**

```bash
node index.js --help                 # Show all available commands
node index.js test                   # Test all providers
node index.js test --provider local  # Test specific provider
node index.js validate               # Validate configuration
node index.js info                   # Show system information
```

### File Operations

**Basic file management:**

```bash
# Upload/copy files
node index.js upload --file document.pdf --target ./uploads/document.pdf
node index.js copy --source ./file.pdf --target ./backup/file.pdf

# Download files (local copy)
node index.js download --source ./uploads/document.pdf --target ./downloads/

# Move and rename files
node index.js move --source ./file.pdf --target ./archive/file.pdf
node index.js rename --file ./old-name.pdf --name new-name.pdf

# Delete files
node index.js delete --file ./old-file.pdf --yes

# List and search files
node index.js list --directory ./uploads
node index.js list --directory ./uploads --recursive
node index.js search --query "*.pdf" --directory ./uploads
```

**Folder operations:**

```bash
# Create and manage directories
node index.js mkdir --directory ./new-folder
node index.js ls-folders --directory ./uploads
node index.js rmdir --directory ./old-folder --recursive --yes
```

### File Watching

**Start and manage file watching:**

```bash
# Start watching
node index.js watch ./documents                    # Watch directory
node index.js watch ./file.txt --no-recursive      # Watch single file
node index.js watch ./project --events add,change  # Filter events

# Manage watchers
node index.js watch-list              # List active watchers
node index.js watch-list --verbose    # Detailed watcher info
node index.js watch-status            # Show watching statistics
node index.js watch-stop ./documents  # Stop specific watcher
node index.js watch-stop --all        # Stop all watchers
```

### Compression Operations

**Compress and decompress files:**

```bash
# Basic compression
node index.js compress --file ./document.pdf --output ./compressed.zip
node index.js compress --file ./folder --output ./archive.tar.gz --format tar.gz
node index.js compress --file ./data --output ./backup.zip --level 9

# Decompression
node index.js decompress --file ./archive.zip --directory ./extracted/
node index.js decompress --file ./backup.tar.gz --directory ./restored/ --overwrite

# Batch operations
node index.js compress-batch --directory ./files --output ./archives/
node index.js decompress-batch --directory ./archives --output ./extracted/

# Compression status
node index.js compression-status
```

### File Sharing & Tunneling

**Share files temporarily:**

```bash
# Basic file sharing
node index.js share ./document.pdf              # Default 1h expiration
node index.js share ./folder --expires 30m      # 30 minutes
node index.js share ./file.zip --expires 2h     # 2 hours
node index.js share ./project --multi-download  # Allow multiple downloads

# Advanced sharing options
node index.js share ./data.zip --expires 24h --keep-alive --no-auto-shutdown

# Tunnel management
node index.js tunnel-status   # Show active tunnels and URLs
node index.js tunnel-cleanup  # Clean up expired URLs and tunnels
```

### S3-Compatible File Server

**Start the S3 server (core feature):**

```bash
# Basic S3 server
node index.js serve-s3 ./storage --port 5000
node index.js serve-s3 ./storage --port 5000 --auth  # With authentication

# S3 server with bucket mapping
node index.js serve-s3 ./storage \
  --bucket containers:./storage/containers \
  --bucket images:./storage/images \
  --bucket docs:./storage/documents \
  --port 9000 --auth

# S3 server with tunnel (public access)
node index.js serve-s3 ./content \
  --port 8000 --tunnel --tunnel-service ngrok \
  --name my-public-server

# S3 server with monitoring
node index.js serve-s3 ./data \
  --port 7000 --monitor --interactive \
  --name monitoring-server
```

**S3 server with auto-shutdown:**

```bash
# Auto-shutdown after downloads
node index.js serve-s3 ./container-storage \
  --port 9000 --auto-shutdown \
  --shutdown-delay 30000 --max-idle 600000

# Background server mode
node index.js serve-s3 ./storage \
  --port 5000 --background --name bg-server

# Container registry example
node index.js serve-s3 ./containers \
  --bucket containers:./containers \
  --bucket registry:./registry \
  --port 9000 --auto-shutdown \
  --name container-registry
```

**S3 server management:**

```bash
# Server status and control
node index.js server-status                 # Show all active servers
node index.js server-status --json          # JSON output
node index.js server-stop --all             # Stop all servers
node index.js server-stop --name my-server  # Stop specific server

# Server cleanup
node index.js server-cleanup                # Clean up stopped servers
```

### Real-Time Monitoring

**Monitor server activity:**

```bash
# Real-time monitoring (when server started with --monitor)
# Automatically displays:
# - Active downloads with progress bars
# - Server status and uptime
# - Download completion status
# - Auto-shutdown countdown

# Interactive mode (when server started with --interactive)
# Available commands in interactive mode:
# - status: Show server status
# - downloads: Show active downloads
# - stop: Stop the server
# - help: Show available commands
```

### NPM Scripts for Development

```bash
# Testing
npm test                  # Run all tests
npm run test:cli          # Test CLI integration
npm run test:local        # Test local operations
npm run test:watch        # Test file watching
npm run test:compression  # Test compression
npm run test:tunnel       # Test tunneling
npm run test:http         # Test HTTP server
npm run test:streaming    # Test file streaming
npm run test:s3           # Test S3 server
npm run test:monitor      # Test monitoring/auto-shutdown

# Demos
npm run demo:cli                      # CLI integration demo
npm run demo:basic                    # Basic operations demo
npm run demo:watch                    # File watching demo
npm run demo:compression              # Compression demo
npm run demo:tunnel                   # File sharing demo
npm run demo:container-registry       # Container registry demo (simple)
npm run demo:container-registry-full  # Container registry demo (full)
npm run demo:container-registry-test  # Test container registry components

# Server shortcuts
npm run serve-s3          # Start S3 server on port 5000
npm run server-status     # Check server status
npm run server-stop       # Stop all servers

# Cleanup
npm run clean             # Clean test files
npm run clean:tests       # Clean test results
```

### Common Use Cases

**Container Registry Demo (Quick Start):**

```bash
# Run the complete container registry demo
npm run demo:container-registry

# Or with real containers
npm run demo:container-registry-full

# Test the setup first
npm run demo:container-registry-test
```

This demo showcases:

- **Container Export**: Exports Podman/Docker containers to registry format
- **Public Tunneling**: Creates accessible HTTPS URLs via ngrok
- **Secure Access**: Generates temporary AWS-style credentials
- **Real-time Monitoring**: Shows download progress and statistics
- **Auto-shutdown**: Automatically cleans up when downloads complete

See the [Container Registry Demo](./demos/container-registry-demo/README.md) for complete documentation.
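Because the registry serves objects over the documented `GET /<bucket>/<key>` URL layout, a client needs nothing beyond plain HTTP to pull container artifacts. The following is a minimal client-side sketch; the `objectUrl` and `pullContainer` helpers, host, and port are illustrative assumptions (not part of the package's API), and `fetch` is the Node 18+ global:

```javascript
// Build the object URL for a bucket/key pair, following the documented
// GET /<bucket>/<key> endpoint layout of the S3-compatible server.
function objectUrl(baseUrl, bucket, key) {
  return `${baseUrl.replace(/\/$/, '')}/${bucket}/${encodeURIComponent(key)}`;
}

// Fetch each listed artifact into memory (Node 18+ global fetch).
// A real client would stream large layers to disk instead.
async function pullContainer(baseUrl, bucket, keys) {
  const artifacts = [];
  for (const key of keys) {
    const res = await fetch(objectUrl(baseUrl, bucket, key));
    if (!res.ok) throw new Error(`GET ${key} failed: ${res.status}`);
    artifacts.push({ key, bytes: Buffer.from(await res.arrayBuffer()) });
  }
  return artifacts;
}

module.exports = { objectUrl, pullContainer };
```

For example, `pullContainer('http://localhost:9000', 'containers', ['manifest.json', 'layer-1.tar'])` would retrieve the artifacts published by a registry server started as shown in the container registry setup.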
**Container Registry Setup:**

```bash
# Set up an S3-compatible container registry
node index.js serve-s3 ./container-storage \
  --bucket containers:./container-storage/containers \
  --bucket registry:./container-storage/registry \
  --port 9000 --auto-shutdown \
  --name container-registry

# Access containers at:
# http://localhost:9000/containers/manifest.json
# http://localhost:9000/containers/config.json
# http://localhost:9000/containers/layer1.tar
```

**Public File Sharing:**

```bash
# Share files with a public tunnel
node index.js serve-s3 ./public-files \
  --port 8000 --tunnel --tunnel-service ngrok \
  --bucket files:./public-files \
  --name public-share

# Or temporary file sharing
node index.js share ./important-file.zip \
  --expires 24h --multi-download
```

**Development File Server:**

```bash
# Development server with monitoring
node index.js serve-s3 ./dev-content \
  --port 3000 --monitor --interactive \
  --bucket assets:./dev-content/assets \
  --bucket uploads:./dev-content/uploads
```

**Automated Backup System:**

```bash
# Watch and compress new files
node index.js watch ./documents &

# In another terminal, set up an S3 server for backup access
node index.js serve-s3 ./backups \
  --bucket daily:./backups/daily \
  --bucket weekly:./backups/weekly \
  --port 9090 --auth
```

## Library Usage

### Installation as a Node.js Module

```bash
npm install local-remote-file-manager
```

### Quick Start - Factory Functions

The library provides convenient factory functions for quick setup:

```javascript
const {
  createManager,
  createS3Server,
  compressFile,
  watchDirectory
} = require('local-remote-file-manager');

// Quick file operations
async function quickStart() {
  // Create a file manager with default config
  const manager = await createManager();

  // Get providers for different operations
  const local = manager.getProvider('local');
  const files = await local.list('./my-directory');

  // Quick compression
  await compressFile('./my-folder', './archive.zip');

  // Start file watching
  const watcher = await watchDirectory('./watched-folder');
  watcher.on('change', (data) => {
    console.log('File changed:', data.path);
  });
}
```

### Basic Integration

```javascript
const { LocalRemoteManager, ConfigManager } = require('local-remote-file-manager');

class FileManagementApp {
  constructor() {
    this.config = new ConfigManager();
    this.fileManager = null;
  }

  async initialize() {
    await this.config.load();
    this.fileManager = new LocalRemoteManager(this.config);

    // Set up event handling
    this.fileManager.on('fileEvent', (data) => {
      console.log('File event:', data.type, data.path);
    });
  }

  async processFile(inputPath, outputPath) {
    const local = this.fileManager.getProvider('local');
    const compression = this.fileManager.getCompressionProvider();

    // Copy file
    await local.copy(inputPath, outputPath);

    // Compress file
    const result = await compression.compress(
      outputPath,
      outputPath.replace(/\.[^/.]+$/, '.zip')
    );

    return result;
  }
}
```

### S3-Compatible Server

```javascript
const { createS3Server } = require('local-remote-file-manager');
const { S3HttpServer } = require('local-remote-file-manager/src/s3Server');

async function createFileServer() {
  const server = createS3Server({
    port: 5000,
    rootDirectory: './storage',
    bucketMapping: new Map([
      ['public', './public-files'],
      ['private', './private-files']
    ]),
    // Authentication
    authentication: {
      enabled: true,
      tempCredentials: true
    },
    // Monitoring and auto-shutdown
    monitoring: {
      enabled: true,
      dashboard: true
    },
    autoShutdown: {
      enabled: true,
      timeout: 3600000 // 1 hour
    }
  });

  // Event handling
  server.on('request', (data) => {
    console.log(`${data.method} ${data.path}`);
  });

  server.on('download', (data) => {
    console.log(`Downloaded: ${data.path} (${data.size} bytes)`);
  });

  await server.start();
  console.log('S3 server running on http://localhost:5000');
  return server;
}

class ContainerRegistryServer {
  constructor() {
    this.server = null;
  }

  async startWithAutoShutdown() {
    // Create an S3 server with auto-shutdown and monitoring
    this.server = new S3HttpServer({
      port: 9000,
      serverName: 'container-registry',
      rootDirectory: './container-storage',

      // Auto-shutdown configuration
      enableAutoShutdown: true,
      shutdownOnCompletion: true,
      shutdownTriggers: ['completion', 'timeout', 'manual'],
      completionShutdownDelay: 30000, // 30 seconds after completion
      maxIdleTime: 600000,            // 10 minutes idle
      maxTotalTime: 3600000,          // 1 hour maximum

      // Real-time monitoring
      enableRealTimeMonitoring: true,
      enableDownloadTracking: true,
      monitoringUpdateInterval: 2000, // 2 seconds

      // Event notifications
      enableEventNotifications: true,
      notificationChannels: ['console', 'file'],

      // S3 configuration
      enableAuth: false, // Simplified for container usage
      bucketMapping: new Map([
        ['containers', 'container-files'],
        ['registry', 'registry-data']
      ])
    });

    // Set up event listeners
    this.setupEventListeners();

    // Start the server
    const result = await this.server.start();
    console.log(`Container registry started: ${result.localUrl}`);

    // Configure expected downloads for auto-shutdown
    await this.configureExpectedDownloads();

    return result;
  }

  setupEventListeners() {
    // Download progress tracking
    this.server.on('downloadStarted', (info) => {
      console.log(`Download started: ${info.bucket}/${info.key} (${this.formatBytes(info.fileSize)})`);
    });

    this.server.on('downloadCompleted', (info) => {
      console.log(`Download completed: ${info.bucket}/${info.key} in ${info.duration}ms`);
    });

    this.server.on('downloadFailed', (info) => {
      console.log(`Download failed: ${info.bucket}/${info.key} - ${info.error}`);
    });

    // Auto-shutdown events
    this.server.on('allDownloadsComplete', (info) => {
      console.log(`All downloads complete! Auto-shutdown will trigger in ${info.shutdownDelay / 1000}s`);
    });

    this.server.on('shutdownScheduled', (info) => {
      console.log(`Server shutdown scheduled: ${info.reason} (${Math.round(info.delay / 1000)}s)`);
    });

    this.server.on('shutdownWarning', (info) => {
      console.log(`Server shutting down in ${Math.round(info.timeRemaining / 1000)} seconds`);
    });
  }

  async configureExpectedDownloads() {
    // Set expected container downloads
    const expectedDownloads = [
      { bucket: 'containers', key: 'manifest.json', size: 1024 },
      { bucket: 'containers', key: 'config.json', size: 512 },
      { bucket: 'containers', key: 'layer-1.tar', size: 1048576 }, // 1 MB
      { bucket: 'containers', key: 'layer-2.tar', size: 2097152 }  // 2 MB
    ];

    const result = this.server.setExpectedDownloads(expectedDownloads);
    if (result.success) {
      console.log(`Configured ${result.expectedCount} expected downloads (${this.formatBytes(result.totalBytes)} total)`);
    }
  }

  formatBytes(bytes) {
    if (bytes === 0) return '0 B';
    const k = 1024;
    const sizes = ['B', 'KB', 'MB', 'GB'];
    const i = Math.floor(Math.log(bytes) / Math.log(k));
    return parseFloat((bytes / Math.pow(k, i)).toFixed(2)) + ' ' + sizes[i];
  }

  async getMonitoringData() {
    // Get real-time monitoring data
    return {
      serverStatus: this.server.getStatus(),
      downloadStats: this.server.getDownloadStats(),
      dashboardData: this.server.getMonitoringData(),
      completionStatus: this.server.getDownloadCompletionStatus()
    };
  }

  async gracefulShutdown() {
    console.log('Initiating graceful shutdown...');
    await this.server.stop({ graceful: true, timeout: 30000 });
    console.log('Server stopped gracefully');
  }
}

// Usage example
async function runContainerRegistry() {
  const registry = new ContainerRegistryServer();

  try {
    // Start the server with monitoring
    await registry.startWithAutoShutdown();

    // The server will automatically shut down when all expected downloads
    // complete, or after the timeout periods are reached.

    // Manual shutdown if needed
    process.on('SIGINT', async () => {
      await registry.gracefulShutdown();
      process.exit(0);
    });
  } catch (error) {
    console.error('Failed to start container registry:', error);
  }
}
```

### Advanced File Watching

```javascript
const { LocalRemoteManager } = require('local-remote-file-manager');

class DocumentWatcher {
  constructor() {
    this.fileManager = new LocalRemoteManager();
    this.setupEventHandlers();
  }

  setupEventHandlers() {
    this.fileManager.on('fileAdded', (event) => {
      console.log(`New file detected: ${event.filePath}`);
      this.processNewFile(event.filePath);
    });

    this.fileManager.on('fileChanged', (event) => {
      console.log(`File modified: ${event.filePath}`);
      this.handleFileChange(event.filePath);
    });

    this.fileManager.on('fileRemoved', (event) => {
      console.log(`File deleted: ${event.filePath}`);
      this.handleFileRemoval(event.filePath);
    });
  }

  async startWatching(directory) {
    const watchResult = await this.fileManager.startWatching(directory, {
      recursive: true,
      events: ['add', 'change', 'unlink'],
      ignoreDotfiles: true
    });

    console.log(`Started watching: ${watchResult.watchId}`);
    return watchResult;
  }

  async processNewFile(filePath) {
    try {
      // Auto-compress large files
      const fileInfo = await this.fileManager.getFileInfo(filePath);

      if (fileInfo.size > 10 * 1024 * 1024) { // 10 MB
        const compressedPath = filePath + '.zip';
        await this.fileManager.compressFile(filePath, compressedPath);
        console.log(`Auto-compressed large file: ${compressedPath}`);
      }
    } catch (error) {
      console.error(`Failed to process new file: ${error.message}`);
    }
  }
}
```

### HTTP Server with Tunnel Integration

```javascript
const { LocalRemoteManager } = require('local-remote-file-manager');

class FileServerApp {
  constructor() {
    this.fileManager = new LocalRemoteManager();
  }

  async startTunneledServer(contentDirectory) {
    // Create an HTTP server with an automatic tunnel
    const serverInfo = await this.fileManager.createTunneledServer({
      port: 3000,
      rootDirectory: contentDirectory,
      tunnelService: 'serveo' // or 'ngrok'
    });

    console.log(`Local server: http://localhost:${serverInfo.port}`);
    console.log(`Public URL: ${serverInfo.tunnelUrl}`);
    return serverInfo;
  }

  async addCustomRoutes(serverId) {
    // Add high-priority custom routes that override generic patterns
    await this.fileManager.addCustomRoute(
      serverId, 'GET', '/api/status',
      (req, res) => {
        res.json({ status: 'active', timestamp: new Date().toISOString() });
      },
      { priority: 100 } // High priority overrides generic /:bucket/* routes
    );

    // Add an API endpoint with medium priority
    await this.fileManager.addCustomRoute(
      serverId, 'GET', '/api/:version/health',
      (req, res) => {
        res.json({ health: 'ok', version: req.params.version });
      },
      { priority: 50 }
    );

    // Lower-priority route (handled after higher-priority routes)
    await this.fileManager.addCustomRoute(
      serverId, 'GET', '/docs/:page',
      (req, res) => {
        res.send(`Documentation page: ${req.params.page}`);
      },
      { priority: 10 }
    );

    console.log('Custom routes added with priority-based routing');
  }

  async serveFiles() {
    const server = await this.startTunneledServer('./public-files');

    // Add custom routes with the priority system
    await this.addCustomRoutes(server.serverId);

    // Monitor server status
    setInterval(async () => {
      const status = await this.fileManager.getServerStatus(server.serverId);
      console.log(`Server status: ${status.status}, Requests: ${status.requestCount}`);
    }, 30000);

    return server;
  }
}
```

### S3-Compatible Object Storage

```javascript
const { S3HttpServer } = require('local-remote-file-manager/src/s3Server');

class S3ObjectStorage {
  constructor() {
    this.s3Server = new S3HttpServer({
      port: 9000,
      serverName: 'my-s3-server',
      rootDirectory: './s3-storage',
      enableAuth: true,
      bucketMapping: new Map([
        ['documents', 'user-docs'],
        ['images', 'media/images'],
        ['backups', 'backup-storage']
      ]),
      bucketAccessControl: new Map([
        ['documents', { read: true, write: true }],
        ['images', { read: true, write: false }],
        ['backups', { read: true, write: true }]
      ])
    });
  }

  async start() {
    const serverInfo = await
this.s3Server.start();
    console.log(`S3 server running on port ${serverInfo.port}`);

    // Generate temporary credentials
    const credentials = this.s3Server.generateTemporaryCredentials({
      permissions: ['read', 'write'],
      buckets: ['documents', 'backups'],
      expiryMinutes: 60
    });

    console.log(`Access Key: ${credentials.accessKey}`);
    console.log(`Secret Key: ${credentials.secretKey}`);

    return { serverInfo, credentials };
  }

  async enableMonitoring() {
    // Start the real-time monitoring dashboard
    this.s3Server.startMonitoringDashboard({
      updateInterval: 2000,
      showServerStats: true,
      showDownloadProgress: true,
      showActiveDownloads: true,
      showShutdownStatus: true
    });

    // Set up download analytics
    this.s3Server.on('downloadCompleted', (info) => {
      console.log(`Download analytics: ${info.key} (${info.bytes} bytes in ${info.duration}ms)`);

      // Get real-time dashboard data
      const dashboardData = this.s3Server.getMonitoringData();
      console.log(`Total downloads: ${dashboardData.downloadStats.totalDownloads}`);
      console.log(`Average speed: ${this.formatSpeed(dashboardData.downloadStats.averageSpeed)}`);
    });

    return true;
  }

  formatSpeed(bytesPerSecond) {
    if (bytesPerSecond < 1024) return `${bytesPerSecond} B/s`;
    if (bytesPerSecond < 1024 * 1024) return `${(bytesPerSecond / 1024).toFixed(1)} KB/s`;
    return `${(bytesPerSecond / (1024 * 1024)).toFixed(1)} MB/s`;
  }
}

// CLI equivalent commands:
// Instead of complex library setup, use simple CLI commands.

// Start an S3 server with authentication and bucket mapping:
// node index.js serve-s3 ./s3-storage --port 9000 --auth \
//   --bucket documents:./s3-storage/user-docs \
//   --bucket images:./s3-storage/media/images \
//   --bucket backups:./s3-storage/backup-storage \
//   --monitor --name my-s3-server

// S3 server with auto-shutdown for a container registry:
// node index.js serve-s3 ./containers --port 9000 \
//   --bucket containers:./containers \
//   --auto-shutdown --monitor --name container-registry

// S3 server with a tunnel for public access:
// node index.js serve-s3 ./public-files --port 8000 \
//   --tunnel --tunnel-service ngrok --bucket files:./public-files

// Access via S3-compatible endpoints:
// GET  http://localhost:9000/documents/myfile.pdf
// HEAD http://localhost:9000/images/photo.jpg
```

### CLI Integration Examples

The CLI provides direct access to all library features with simple commands:

```javascript
// Library approach (complex setup):
const server = new S3HttpServer({
  enableAutoShutdown: true,
  shutdownTriggers: ['completion'],
  completionShutdownDelay: 30000,
  enableRealTimeMonitoring: true
});
await server.start();

// CLI approach (simple command):
// node index.js serve-s3 ./storage --auto-shutdown --shutdown-delay 30000 --monitor

// Multiple operations with the library require coordination:
// 1. Set up a file watcher
// 2. Set up a compression handler
// 3. Set up an S3 server
// 4. Coordinate between them

// CLI approach - each command handles coordination:
// Terminal 1: node index.js watch ./documents
// Terminal 2: node index.js serve-s3 ./storage --port 9000 --monitor
// Terminal 3: node index.js compress-batch --directory ./documents --output ./archives/
```

### Container Registry Use Case

```bash
# Complete container registry setup with the CLI:
mkdir -p ./container-storage/containers ./container-storage/registry

# Start the S3-compatible container registry
node index.js serve-s3 ./container-storage \
  --bucket containers:./container-storage/containers \
  --bucket registry:./container-storage/registry \
  --port 9000 --auto-shutdown --monitor \
  --name container-registry

# The server automatically shuts down after container downloads complete.
# Real-time monitoring shows download progress and completion status.
# Access containers at: http://localhost:9000/containers/<filename>
```

### Real-Time Monitoring Dashboard

```javascript
const { MonitoringDashboard, DownloadMonitor } = require('local-remote-file-manager');

class LiveMonitoringSystem {
  constructor() {
    this.dashboard = new
MonitoringDashboard({
      updateInterval: 1000,
      showServerStats: true,
      showDownloadProgress: true,
      showActiveDownloads: true,
      showShutdownStatus: true
    });

    this.downloadMonitor = new DownloadMonitor({
      trackPartialDownloads: true,
      progressUpdateInterval: 1000
    });
  }

  async startMonitoring(s3Server) {
    // Connect monitoring to the S3 server
    this.dashboard.connectToServer(s3Server);
    this.downloadMonitor.connectToServer(s3Server);

    // Start the real-time dashboard
    this.dashboard.start();

    // Set up download tracking
    this.downloadMonitor.on('downloadStarted', (info) => {
      this.dashboard.addActiveDownload(info);
    });

    this.downloadMonitor.on('downloadProgress', (info) => {
      this.dashboard.updateDownloadProgress(info.downloadId, info);
    });

    this.downloadMonitor.on('downloadCompleted', (info) => {
      this.dashboard.completeDownload(info.downloadId, info);
    });

    // Example dashboard output:
    /*
    +------------------------------------------------------------------+
    | S3 Object Storage Server                                         |
    +------------------------------------------------------------------+
    | Status: RUNNING                         Uptime: 45s              |
    | Port: 9000          Public URL: http://localhost:9000            |
    +------------------------------------------------------------------+
    | Downloads Progress                                               |
    | ██████████████████████████████ 3/5 (60%)                         |
    | Active Downloads: 2      Completed: 3      Failed: 0             |
    | Speed: 1.2 MB/s          Total: 2.1 MB                           |
    +------------------------------------------------------------------+
    | Active Downloads                                                 |
    | ▶ layer-2.tar (1.2 MB) ████████████████████████░░░░░░ 80% @ 450 KB/s |
    | ▶ layer-3.tar (512 KB) ████████████░░░░░░░░░░░░░░░░░░ 45% @ 230 KB/s |
    +------------------------------------------------------------------+
    | Auto-Shutdown: ON        Trigger: Completion + 30s               |
    | Next Check: 00:00:05     Status: Monitoring                      |
    +------------------------------------------------------------------+
    */

    console.log('Real-time monitoring dashboard started');
    return true;
  }

  async stopMonitoring() {
    this.dashboard.stop();
    this.downloadMonitor.stop();
    console.log('Monitoring stopped');
  }

  async getAnalytics() {
    return {
      dashboard: this.dashboard.getCurrentData(),
      downloads: this.downloadMonitor.getStatistics(),
      performance: this.downloadMonitor.getPerformanceMetrics()
    };
  }
}
```

### Enhanced File Streaming

```javascript
const { FileStreamingUtils, DownloadTracker } = require('local-remote-file-manager');

class StreamingFileServer {
  async serveFileWithProgress(filePath, response, rangeHeader = null) {
    try
    {
      // Get file information
      const fileInfo = await FileStreamingUtils.getFileInfo(filePath);
      console.log(`📄 Serving: ${fileInfo.name} (${fileInfo.size} bytes)`);

      // Create download tracker
      const tracker = new DownloadTracker(fileInfo.size);

      // Handle range request if specified
      let streamOptions = {};
      if (rangeHeader) {
        const range = FileStreamingUtils.parseRangeHeader(rangeHeader, fileInfo.size);
        if (range.isValid) {
          streamOptions = { start: range.start, end: range.end };
          response.status = 206; // Partial Content
          response.setHeader('Content-Range',
            FileStreamingUtils.formatContentRange(range.start, range.end, fileInfo.size)
          );
        }
      }

      // Set response headers
      response.setHeader('Content-Type', FileStreamingUtils.getMimeType(filePath));
      response.setHeader('Content-Length',
        streamOptions.end ? (streamOptions.end - streamOptions.start + 1) : fileInfo.size);
      response.setHeader('ETag', FileStreamingUtils.generateETag(fileInfo));
      response.setHeader('Last-Modified', fileInfo.lastModified.toUTCString());
      response.setHeader('Accept-Ranges', 'bytes');

      // Create and pipe stream
      const stream = await FileStreamingUtils.createReadStream(filePath, streamOptions);

      stream.on('data', (chunk) => {
        tracker.updateProgress(chunk.length);
        const progress = tracker.getProgress();
        console.log(`📈 Progress: ${progress.percentage}% (${progress.speed}/s)`);
      });

      stream.on('end', () => {
        console.log(`✅ Transfer complete: ${filePath}`);
      });

      stream.pipe(response);
    } catch (error) {
      console.error(`❌ Streaming error: ${error.message}`);
      response.status = 500;
      response.end('Internal Server Error');
    }
  }
}
```

### Batch File Operations

```javascript
const { LocalRemoteManager } = require('local-remote-file-manager');

class BatchFileProcessor {
  constructor() {
    this.fileManager = new LocalRemoteManager();
  }

  async backupDocuments(sourceDirectory, backupDirectory) {
    // List all files
    const files = await this.fileManager.listFiles(sourceDirectory, { recursive: true });

    // Filter for documents
    const
      documents = files.filter(file =>
        /\.(pdf|doc|docx|txt|md)$/i.test(file.name)
      );

    console.log(`Found ${documents.length} documents to backup`);

    // Batch compress all documents
    const compressionResults = await this.fileManager.compressMultipleFiles(
      documents.map(doc => doc.path),
      backupDirectory,
      {
        format: 'zip',
        compressionLevel: 6,
        preserveStructure: true
      }
    );

    // Generate temporary share URLs for all backups
    const shareResults = await Promise.all(
      compressionResults.successful.map(async (result) => {
        return await this.fileManager.createShareableUrl(result.outputPath, {
          expiresIn: '24h',
          downloadLimit: 5
        });
      })
    );

    return {
      processed: documents.length,
      compressed: compressionResults.successful.length,
      failed: compressionResults.failed.length,
      shared: shareResults.length,
      shareUrls: shareResults.map(r => r.shareableUrl)
    };
  }

  async syncDirectories(sourceDir, targetDir) {
    const sourceFiles = await this.fileManager.listFiles(sourceDir, { recursive: true });
    const targetFiles = await this.fileManager.listFiles(targetDir, { recursive: true });

    const results = { copied: [], updated: [], errors: [] };

    for (const file of sourceFiles) {
      try {
        const targetPath = file.path.replace(sourceDir, targetDir);
        const targetExists = targetFiles.some(t => t.path === targetPath);

        if (!targetExists) {
          await this.fileManager.copyFile(file.path, targetPath);
          results.copied.push(file.path);
        } else {
          const sourceInfo = await this.fileManager.getFileInfo(file.path);
          const targetInfo = await this.fileManager.getFileInfo(targetPath);

          if (sourceInfo.lastModified > targetInfo.lastModified) {
            await this.fileManager.copyFile(file.path, targetPath);
            results.updated.push(file.path);
          }
        }
      } catch (error) {
        results.errors.push({ file: file.path, error: error.message });
      }
    }

    return results;
  }
}
```

## 🧪 Testing

### Automated Testing

Run comprehensive tests for all features:

```bash
npm test                     # Interactive test selection
npm run test:all             # Test all providers sequentially
npm run test:cli             # Test CLI integration (NEW)
npm run test:local           # Test local file operations
npm run test:watch           # Test file watching
npm run test:compression     # Test compression features
npm run test:tunnel          # Test tunneling and sharing
npm run test:http            # Test HTTP server provider
npm run test:streaming       # Test enhanced file streaming
npm run test:s3              # Test S3-compatible object storage
npm run test:monitor         # Test auto-shutdown & monitoring
```

### Test Coverage by Feature

#### Phase 1: Local File Operations (33/33 tests passing)

- ✅ **Basic File Operations**: Upload, download, copy, move, rename, delete
- ✅ **Folder Operations**: Create, list, delete, rename folders
- ✅ **Path Management**: Absolute/relative paths, normalization, validation
- ✅ **Search Operations**: File search by name, pattern matching, recursive search
- ✅ **Error Handling**: Non-existent files, invalid paths, permission errors

#### Phase 2: File Watching (24/24 tests passing)

- ✅ **Directory Watching**: Start/stop watching, recursive monitoring
- ✅ **Event Filtering**: Add, change, delete events with custom filtering
- ✅ **Performance Tests**: High-frequency events, batch event processing
- ✅ **Edge Cases**: Non-existent paths, permission issues, invalid events
- ✅ **Resource Management**: Watcher lifecycle, memory cleanup

#### Phase 3: Compression (30/30 tests passing)

- ✅ **ZIP Operations**: Compression, decompression, multiple compression levels
- ✅ **TAR.GZ Operations**: Archive creation, extraction, directory compression
- ✅ **Format Detection**: Automatic format detection, cross-format operations
- ✅ **Progress Tracking**: Real-time progress events, operation monitoring
- ✅ **Batch Operations**: Multiple file compression, batch decompression
- ✅ **Performance**: Large file handling, memory efficiency tests

#### Phase 4: Tunneling & File Sharing (35/35 tests passing)

- ✅ **Tunnel Management**: Create, destroy tunnels with multiple services
- ✅ **Temporary URLs**: URL generation,
  expiration, access control
- ✅ **File Sharing**: Secure sharing, download tracking, permission management
- ✅ **Service Integration**: ngrok, localtunnel, fallback mechanisms
- ✅ **Security**: Access tokens, expiration handling, cleanup

#### Phase 5: HTTP Server Provider (22/22 tests passing)

- ✅ **Server Lifecycle Management**: Create, start, stop HTTP servers
- ✅ **Static File Serving**: MIME detection, range requests, security headers
- ✅ **Route Registration**: Parameterized routes, middleware support
- ✅ **Tunnel Integration**: Automatic tunnel creation with multiple services
- ✅ **Server Monitoring**: Status tracking, metrics collection, health checks

#### Phase 6: Enhanced File Streaming (12/12 tests passing)

- ✅ **Advanced Streaming**: Range-aware streams, progress tracking
- ✅ **MIME Type Detection**: 40+ file types, automatic detection
- ✅ **Range Request Processing**: Comprehensive range header parsing
- ✅ **Progress Tracking**: Real-time progress, speed calculation
- ✅ **Performance Optimization**: Memory efficiency, large file handling

#### Phase 7: S3-Compatible Object Storage (31/31 tests passing)

- ✅ **S3 GET/HEAD Endpoints**: Object downloads and metadata queries
- ✅ **Authentication System**: AWS-style, Bearer token, Basic auth
- ✅ **Bucket/Key Mapping**: Path mapping with security validation
- ✅ **S3-Compatible Headers**: ETag, Last-Modified, Content-Range
- ✅ **Download Analytics**: Progress tracking, real-time monitoring
- ✅ **Rate Limiting**: Credential management, access control

#### Phase 8: Auto-Shutdown & Monitoring (22/22 tests passing)

- ✅ **Auto-Shutdown Triggers**: Completion, timeout, idle detection
- ✅ **Real-Time Monitoring**: Dashboard, progress bars, status display
- ✅ **Download Tracking**: Individual downloads, completion status
- ✅ **Event Notifications**: Console, file, webhook notifications
- ✅ **Expected Downloads**: Configuration, progress calculation

#### Phase 9: CLI Integration (16/16 tests passing)

- ✅ **Command Validation**: Help commands, option parsing, error handling
- ✅ **S3 Server Commands**: Server start with auth, bucket mapping, monitoring
- ✅ **Server Management**: Status commands, stop commands, cleanup
- ✅ **Configuration Validation**: Directory validation, port conflicts
- ✅ **Error Handling**: Graceful errors, permission handling

### Test Results Summary

```
📊 Overall Test Results
=======================
✅ Total Tests: 225
✅ Passed: 225 (100%)
❌ Failed: 0 (0%)
⭐ Skipped: 0 (0%)
🎯 Success Rate: 100%

⚡ Performance Metrics
=====================
⏱️ Average Test Duration: 15ms
🏃 Fastest Category: Local Operations (2ms avg)
🐌 Slowest Category: CLI Integration (1000ms avg)
🕒 Total Test Suite Time: ~5 minutes

🎉 All features are production-ready!
```

### Manual Testing & Demos

Validate functionality with built-in demos:

```bash
npm run demo:cli           # CLI integration demo (NEW)
npm run demo:basic         # Basic file operations demo
npm run demo:watch         # File watching demonstration
npm run demo:compression   # Compression feature demo
npm run demo:tunnel        # File sharing demo
npm run demo:s3            # S3 server demo (NEW)
npm run demo:monitor       # Auto-shutdown & monitoring demo (NEW)
```

## 📖 API Reference

### LocalRemoteManager

#### Core File Operations

- `uploadFile(sourcePath, targetPath)` - Copy file to target location (alias for local copy)
- `downloadFile(remotePath, localPath)` - Download/copy file from remote location (alias for local copy)
- `getFileInfo(filePath)` - Get file metadata, size, timestamps, and permissions
- `listFiles(directoryPath, options)` - List files with recursive and filtering options
- `deleteFile(filePath)` - Delete a file with error handling
- `copyFile(sourcePath, destinationPath)` - Copy a file to a new location
- `moveFile(sourcePath, destinationPath)` - Move a file to a new location
- `renameFile(filePath, newName)` - Rename a file in the same directory
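A minimal usage sketch combining the core operations above; the file paths are illustrative assumptions, while the method names and option shapes follow this reference:

```javascript
const { LocalRemoteManager } = require('local-remote-file-manager');

async function main() {
  const manager = new LocalRemoteManager();

  // Copy a file into a backup folder, then inspect its metadata
  await manager.copyFile('./report.pdf', './backup/report.pdf');
  const info = await manager.getFileInfo('./backup/report.pdf');
  console.log(`Size: ${info.size} bytes, modified: ${info.lastModified}`);

  // List everything under ./backup, including subdirectories
  const files = await manager.listFiles('./backup', { recursive: true });
  console.log(`Backup now holds ${files.length} file(s)`);

  // Rename the copy in place (same directory)
  await manager.renameFile('./backup/report.pdf', 'report-archived.pdf');
}

main().catch(console.error);
```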
- `searchFiles(pattern, options)` - Search for files by name pattern with recursive support

#### Folder Operations

- `createFolder(folderPath)` - Create a new folder with recursive support
- `listFolders(directoryPath)` - List only directories in a path
- `deleteFolder(folderPath, recursive)` - Delete a folder with optional recursive deletion
- `renameFolder(folderPath, newName)` - Rename a folder in same parent directory
- `copyFolder(sourcePath, destinationPath)` - Copy entire folder structure
- `moveFolder(sourcePath, destinationPath)` - Move entire folder structure
- `getFolderInfo(folderPath)` - Get folder metadata including item count and total size

#### File Watching

- `startWatching(path, options)` - Start monitoring file/directory changes
  - Options: `recursive`, `events`, `ignoreDotfiles`, `debounceMs`
- `stopWatching(watchId | path)` - Stop a specific watcher by ID or path
- `stopAllWatching()` - Stop all active watchers with cleanup
- `listActiveWatchers()` - Get array of active watcher objects
- `getWatcherInfo(watchId)` - Get detailed watcher information including event count
- `getWatchingStatus()` - Get overall watching system status and statistics

#### Compression Operations

- `compressFile(inputPath, outputPath, options)` - Compress file or directory
  - Options: `format` (zip, tar.gz), `level` (1-9), `includeRoot`
- `decompressFile(archivePath, outputDirectory, options)` - Extract archive contents
  - Options: `format`, `overwrite`, `preservePermissions`
- `compressMultipleFiles(fileArray, outputDirectory, options)` - Batch compression with progress
- `decompressMultipleFiles(archiveArray, outputDirectory, options)` - Batch extraction
- `getCompressionStatus()` - Get compression system status and supported formats
- `getCompressionProvider()` - Access compression provider directly

#### Tunneling & File Sharing

- `createTunnel(options)` - Create new tunnel connection
  - Options: `proto` (http, https), `subdomain`, `authToken`, `useExternalServer`, `localPort`
    - `useExternalServer: true` - Forward tunnel to existing HTTP server instead of creating internal server
    - `localPort: number` - Specify external server port to forward tunnel traffic to
- `destroyTunnel(tunnelId)` - Destroy specific tunnel connection
- `createTemporaryUrl(filePath, options)` - Generate temporary shareable URL
  - Options: `permissions`, `expiresAt`, `downloadLimit`
- `revokeTemporaryUrl(urlId)` - Revoke access to shared URL
- `listActiveUrls()` - Get list of active temporary URLs
- `getTunnelStatus()` - Get tunneling system status including