autoaiload

The ultimate, future-ready CLI for smart load testing. Built with zero external dependencies, it offers multi-platform support, real-time accurate results, and powerful terminal styling. Features include configurable test stages and detailed network phase latencies.

#!/usr/bin/env node
// === SMART LOAD TEST V14.7 - ULTIMATE FUTURE-READY (Redirects Not Errors) ===
// ✅ FIX: Redirects (3xx status codes) are no longer counted as "errors" in the overall "Error Rate" and "Total Failed Requests" for the final report.
// ✅ FIX: Live latency (Lat) in progress bar now shows actual latency for ALL HTTP responses (not just 2xx).
// ✅ FIX: Live Status Heatmap now includes a dedicated 3xx (Redirect) category for clearer visualization.
// ✅ FIX: Improved live update accuracy for success/error rates and average latency.
// ✅ FIX: Enhanced terminal clearing and cursor positioning for stable live progress bar display.
// ✅ NEW: Enhanced Progress Bar: Displays real-time RPS, Latency, and Error Rate directly within the progress bar line.
// ✅ NEW: Graphs with Live Value Display: ASCII graphs now show the current metric value/percentage next to their labels.
// ✅ NEW: Graph Point Character Customization: User can choose the character used for plotting points on ASCII graphs.
// ✅ NEW: Real-time Total Data Received Graph: Live ASCII graph showing data transfer rate.
// ✅ NEW: Request Latency Breakdown by HTTP Method: Detailed latency metrics for each HTTP method used.
// ✅ NEW: Configurable Test Stages: Define multiple stages (ramp-up, steady, ramp-down) with varying RPS, duration, and concurrent users.
// ✅ NEW: Detailed Network Phase Latencies: Capture and report DNS lookup, TCP connect, and TLS handshake times for each request.
// ✅ NEW: Real-time Average Latency Graph: Live ASCII graph for average latency during the test.
// ✅ NEW: Configurable Log Level: Control the verbosity of terminal output.
// ✅ NEW: Customizable Progress Bar Characters: Personalize the progress bar's appearance.
// ✅ NEW: Dynamic ASCII Chart Scaling: Graphs automatically adjust to data ranges for better visualization.
// ✅ NEW: Enhanced Error Type Categorization: More granular breakdown of network and HTTP errors.
// ✅ NEW: Response Time Percentile Breakdown by Status Code: Detailed latency percentiles for each successful status code.
// ✅ NEW: Intelligent Warning & Recommendation System: Provides actionable insights based on test results.
// ✅ Enhanced and more sophisticated ANSI color scheme for terminal output.
// ✅ FIX: Ensures the final report is generated and printed only once to the terminal.
// ✅ All text (comments, prompts, output) is now in English.
// ✅ Real-time Status Code Heatmap: ASCII-styled heatmap for 2xx, 4xx, 5xx frequencies.
// ✅ Live Terminal Graphs (ASCII charts): In-house ASCII line charts for success rate, error rate, and CPU usage.
// ✅ Refined JSON Export: Saves ONLY the aggregated test results, not individual request data.
// ✅ Enhanced Live Progress Bar: Comprehensive real-time display including success, errors, CPU, and memory.
// ✅ Built-in Modules Only: No external dependencies, ensuring broad compatibility.
// ✅ Advanced Styling: Next-level console styling with ANSI escape codes for better readability and impact.
// ✅ ES Module Compatibility: Converted all 'require' statements to 'import' for .mjs files.
// ✅ Improved HTTP Error Reporting: Clearer categorization of 4xx and 5xx errors in breakdown.
// ✅ Latency for Failed Requests: Basic latency metrics for failed requests (network and HTTP errors).
// ✅ Default HTTP Method: GET is now the default if no method is specified.
// ✅ Request Latency Percentile Table: Presents latency percentiles in a clean, readable table.
// ✅ Error Rate by Type: Shows percentage of total requests for each error type.
// ✅ More Detailed System Metrics: Added CPU model and memory usage percentage.
// ✅ Improved Input Experience: Clears line after each input for a cleaner terminal.
// ✅ Custom HTTP Request Headers: User can input custom headers (e.g., Authorization, Cookies).
// ✅ Response Header Analysis: Report includes distribution of Content-Type and Server headers.
// ✅ High-Level Summary Table: Quick overview of key metrics at the top of the report.
// ✅ Request Body Support: Optional JSON request body for methods like POST, PUT, PATCH.
// ✅ CPU Usage Fix: Robust handling for os.cpus() returning empty/zero data, displaying N/A gracefully.
// ✅ RPS Input Refinement: Removed "0 for unlimited" to ensure predictable test runs.
// ✅ Test Start/End Timestamps: Report now includes precise start and end times.
// ✅ Latency Histogram: Visual representation of latency distribution.
// ✅ Response Size Distribution: Breakdown of response sizes into buckets.
// ✅ HTTP Method Distribution: Reports on the HTTP methods used.
// ✅ Test Health Status: Comprehensive evaluation of test performance.
// ✅ Multi-Platform Support: Explicitly designed and tested for Termux (Android), Windows, macOS, and Linux.

// ES Module imports
import { EventEmitter } from 'events'; // For setMaxListeners
import readline from 'readline';
import http from 'http';
import https from 'https';
import { URL } from 'url';
import { performance } from 'perf_hooks';
import os from 'os';
import fs from 'fs'; // For file system operations (saving JSON)

// Set max listeners for EventEmitter to prevent warnings in high concurrency scenarios
EventEmitter.setMaxListeners(0);

// --- ANSI Escape Codes for Styling ---
const ANSI = {
  // Reset
  RESET: '\x1b[0m',
  // Text Styles
  BOLD: '\x1b[1m',
  DIM: '\x1b[2m',
  ITALIC: '\x1b[3m',
  UNDERLINE: '\x1b[4m',
  INVERSE: '\x1b[7m',
  HIDDEN: '\x1b[8m',
  STRIKETHROUGH: '\x1b[9m',
  // Foreground Colors (Standard)
  BLACK: '\x1b[30m',
  RED: '\x1b[31m',
  GREEN: '\x1b[32m',
  YELLOW: '\x1b[33m',
  BLUE: '\x1b[34m',
  MAGENTA: '\x1b[35m',
  CYAN: '\x1b[36m',
  WHITE: '\x1b[37m',
  // Foreground Colors (Bright/Light)
  BRIGHT_BLACK: '\x1b[90m', // Gray
  BRIGHT_RED: '\x1b[91m',
  BRIGHT_GREEN: '\x1b[92m',
  BRIGHT_YELLOW: '\x1b[93m',
  BRIGHT_BLUE: '\x1b[94m',
  BRIGHT_MAGENTA: '\x1b[95m',
  BRIGHT_CYAN: '\x1b[96m',
  BRIGHT_WHITE: '\x1b[97m',
  // Background Colors (Standard)
  BG_BLACK:
    '\x1b[40m',
  BG_RED: '\x1b[41m',
  BG_GREEN: '\x1b[42m',
  BG_YELLOW: '\x1b[43m',
  BG_BLUE: '\x1b[44m',
  BG_MAGENTA: '\x1b[45m',
  BG_CYAN: '\x1b[46m',
  BG_WHITE: '\x1b[47m',
  // Background Colors (Bright/Light)
  BG_BRIGHT_BLACK: '\x1b[100m', // Bright Gray
  BG_BRIGHT_RED: '\x1b[101m',
  BG_BRIGHT_GREEN: '\x1b[102m',
  BG_BRIGHT_YELLOW: '\x1b[103m',
  BG_BRIGHT_BLUE: '\x1b[104m',
  BG_BRIGHT_MAGENTA: '\x1b[105m',
  BG_BRIGHT_CYAN: '\x1b[106m',
  BG_BRIGHT_WHITE: '\x1b[107m',
  /**
   * Generates an ANSI escape code for 24-bit (True Color) foreground.
   * @param {string} hex The hex color string (e.g., '#RRGGBB').
   * @returns {string} The ANSI escape code.
   */
  hexFg: (hex) => {
    const r = parseInt(hex.slice(1, 3), 16);
    const g = parseInt(hex.slice(3, 5), 16);
    const b = parseInt(hex.slice(5, 7), 16);
    return `\x1b[38;2;${r};${g};${b}m`;
  },
  /**
   * Generates an ANSI escape code for 24-bit (True Color) background.
   * @param {string} hex The hex color string (e.g., '#RRGGBB').
   * @returns {string} The ANSI escape code.
   */
  hexBg: (hex) => {
    const r = parseInt(hex.slice(1, 3), 16);
    const g = parseInt(hex.slice(3, 5), 16);
    const b = parseInt(hex.slice(5, 7), 16);
    return `\x1b[48;2;${r};${g};${b}m`;
  }
};

// Helper function to apply styling
const style = (text, ...styles) => {
  return styles.join('') + text + ANSI.RESET;
};

// --- Global Variables ---
// Readline interface for user input
const rl = readline.createInterface({ input: process.stdin, output: process.stdout });

/**
 * Prompts the user for input using readline and then clears the current line.
 * @param {string} question The question string to display to the user.
 * @returns {Promise<string>} A Promise that resolves with the user's input.
 */
const askAndClear = async (question) => {
  const answer = await new Promise(resolve => rl.question(question, resolve));
  readline.clearLine(process.stdout, 0);
  readline.cursorTo(process.stdout, 0);
  return answer;
};

// Array to store results of each request
// Stores { latency: number, statusCode: number | null, errorType: string | null, bytesReceived: number, bytesSent: number, method: string, responseHeaders: object, dnsLookupMs: number | null, tcpConnectMs: number | null, tlsHandshakeMs: number | null }
const requestResults = [];
// Counters for overall test statistics
let totalRequestsSent = 0;
let totalDataSent = 0;
let totalDataReceived = 0;
let activeRequests = 0; // Number of requests currently in flight
let peakActiveRequests = 0; // Maximum number of concurrent requests observed
// Counters for real-time heatmap and graphs
let successCount = 0;
let redirectCount = 0; // Counter for 3xx redirects
let clientErrorCount = 0; // 4xx errors
let serverErrorCount = 0; // 5xx errors
let networkErrorCount = 0; // Non-HTTP errors (timeouts, connection refused, etc.)
// Test timing variables
let testStartTime = 0;
let testEndTime = 0; // Added for precise end time
let testDurationActual = 0;
// Interval IDs for live stats updates and system metrics
let liveStatsIntervalId = null;
let systemMetricsIntervalId = null;
// History for system resource usage and live graphs
const HISTORY_LENGTH = 30; // Number of points for ASCII graphs
let cpuUsageHistory = []; // Stores CPU usage percentage at each sample
let memUsageHistory = []; // Stores used memory in bytes at each sample
let liveSuccessRateHistory = []; // Stores success rate % for live graph
let liveErrorRateHistory = []; // Stores total error rate % for live graph
let liveAvgLatencyHistory = []; // Stores average latency for live graph
let liveDataReceivedRateHistory = []; // Stores data received rate for live graph
// Buffer for recent latencies for live average calculation (includes all responses)
const LIVE_LATENCY_BUFFER_SIZE = 100; // Keep track of last 100 latencies
let liveLatencyBuffer = [];
// Objects to store counts for status codes and error types
let statusCodes = {}; // e.g., { '200': 150, '404': 5 } - counts ALL status codes received
let errorTypes = {}; // e.g., { 'TIMEOUT': 10, 'ECONNREFUSED': 3, '4xx_CLIENT_ERROR': 5, '5xx_SERVER_ERROR': 2 } - tracks network errors and categorized HTTP errors
let httpMethods = {}; // e.g., { 'GET': 100, 'POST': 5 }
let responseContentTypes = {}; // e.g., { 'application/json': 50 }
let responseServers = {}; // e.g., { 'nginx': 50 }
// Configuration object to store user inputs
let config = {};
// Flag to ensure report is generated only once
let reportGenerated = false;

// --- Utility Functions ---
/**
 * Formats a number of bytes into a human-readable string (e.g., 1024 -> 1 KB).
 * @param {number} bytes The number of bytes to format.
 * @returns {string} The formatted string.
 */
const formatBytes = (bytes) => {
  const sizes = ['B', 'KB', 'MB', 'GB', 'TB'];
  if (bytes === 0) return '0 B';
  const i = parseInt(Math.floor(Math.log(bytes) / Math.log(1024)), 10);
  return `${(bytes / Math.pow(1024, i)).toFixed(2)} ${sizes[i]}`;
};

/**
 * Formats a number of milliseconds into a human-readable string (e.g., 1234567 -> 20.58 min).
 * @param {number} ms The number of milliseconds to format.
 * @returns {string} The formatted string.
 */
const formatMs = (ms) => {
  if (ms < 1000) return `${ms.toFixed(2)} ms`;
  const seconds = ms / 1000;
  if (seconds < 60) return `${seconds.toFixed(2)} s`;
  const minutes = seconds / 60;
  if (minutes < 60) return `${minutes.toFixed(2)} min`;
  const hours = minutes / 60;
  return `${hours.toFixed(2)} hr`;
};

/**
 * Calculates the specified percentile for a sorted array of numbers.
 * @param {number[]} arr The array of numbers, which must be sorted in ascending order.
 * @param {number} p The percentile to calculate (e.g., 90 for 90th percentile).
 * @returns {number} The calculated percentile value. Returns 0 if the array is empty.
 */
const calculatePercentile = (arr, p) => {
  if (!arr || arr.length === 0) return 0;
  const index = (p / 100) * (arr.length - 1);
  const lower = Math.floor(index);
  const upper = Math.ceil(index);
  if (lower === upper) return arr[lower]; // Exact match
  const weight = index - lower;
  // Linear interpolation between the two closest data points
  return arr[lower] * (1 - weight) + arr[upper] * weight;
};

/**
 * Calculates the standard deviation of an array of numbers.
 * Uses the sample standard deviation formula (N-1 in the denominator).
 * @param {number[]} arr The array of numbers.
 * @returns {number} The calculated standard deviation. Returns 0 if the array has less than 2 elements.
 */
const calculateStdDev = (arr) => {
  if (!arr || arr.length < 2) return 0;
  const mean = arr.reduce((sum, val) => sum + val, 0) / arr.length;
  const variance = arr.reduce((sum, val) => sum + Math.pow(val - mean, 2), 0) / (arr.length - 1);
  return Math.sqrt(variance);
};

// Variable to store the last CPU usage snapshot for calculating difference
let lastCpuUsage = getCpuUsage(); // Initial capture

/**
 * Gets the current CPU times (idle and total tick) from os.cpus().
 * Handles cases where os.cpus() might return an empty array or invalid data.
 * @returns {object} An object containing `idle` (total idle time), `tick` (total CPU time), and `available` (boolean).
 */
function getCpuUsage() {
  const cpus = os.cpus();
  if (!cpus || cpus.length === 0) {
    // If no CPU info, return a state that indicates unavailability
    return { idle: 0, tick: 0, available: false };
  }
  let totalIdle = 0;
  let totalTick = 0;
  for (const cpu of cpus) {
    for (const type in cpu.times) {
      totalTick += cpu.times[type];
    }
    totalIdle += cpu.times.idle;
  }
  return { idle: totalIdle, tick: totalTick, available: true };
}

// Keep track of the last total data received for calculating rate
let lastTotalDataReceived = 0;

/**
 * Samples current CPU and Memory usage and stores them in history arrays.
 * This function is called periodically during the test.
 * It calculates CPU usage percentage over the sampling interval.
 */
const sampleSystemMetrics = () => {
  // CPU Usage Calculation
  const currentCpuUsage = getCpuUsage();
  // If CPU data is not available or it's the very first sample where lastCpuUsage is also zero,
  // push 0 or handle it gracefully.
  if (!currentCpuUsage.available || (lastCpuUsage.tick === 0 && lastCpuUsage.idle === 0 && currentCpuUsage.tick === 0 && currentCpuUsage.idle === 0)) {
    cpuUsageHistory.push(0); // Push 0 if CPU data is consistently unavailable or initial state
  } else {
    const idleDifference = currentCpuUsage.idle - lastCpuUsage.idle;
    const totalDifference = currentCpuUsage.tick - lastCpuUsage.tick;
    // Calculate CPU percentage: (total - idle) / total
    // Ensure totalDifference is not zero to prevent division by zero
    const cpuPercentage = totalDifference > 0 ? ((totalDifference - idleDifference) / totalDifference) * 100 : 0;
    cpuUsageHistory.push(cpuPercentage);
  }
  lastCpuUsage = currentCpuUsage; // Update last snapshot for next calculation
  // Memory Usage Calculation
  const totalMem = os.totalmem();
  const freeMem = os.freemem();
  const usedMem = totalMem - freeMem; // Calculate used memory
  memUsageHistory.push(usedMem);
  // Keep history arrays to a fixed length
  if (cpuUsageHistory.length > HISTORY_LENGTH) cpuUsageHistory.shift();
  if (memUsageHistory.length > HISTORY_LENGTH) memUsageHistory.shift();
  // Update live graph histories
  const currentTotalRequests = totalRequestsSent; // Use totalRequestsSent for overall count
  const currentSuccessRate = currentTotalRequests > 0 ? (successCount / currentTotalRequests) * 100 : 0;
  // For live error rate, only consider 4xx, 5xx, and network errors. Redirects are not "errors" here.
  const currentErrorRate = currentTotalRequests > 0 ? ((clientErrorCount + serverErrorCount + networkErrorCount) / currentTotalRequests) * 100 : 0;
  liveSuccessRateHistory.push(currentSuccessRate);
  liveErrorRateHistory.push(currentErrorRate);
  // Calculate average latency from the dedicated liveLatencyBuffer
  const currentAvgLatency = liveLatencyBuffer.length > 0 ?
    (liveLatencyBuffer.reduce((sum, l) => sum + l, 0) / liveLatencyBuffer.length) : 0;
  liveAvgLatencyHistory.push(currentAvgLatency);
  // Calculate data received rate for the live graph
  const dataReceivedInInterval = totalDataReceived - lastTotalDataReceived;
  const dataReceivedRate = dataReceivedInInterval / 1024; // in KB/sec (since sample interval is 1s)
  liveDataReceivedRateHistory.push(dataReceivedRate);
  lastTotalDataReceived = totalDataReceived;
  if (liveSuccessRateHistory.length > HISTORY_LENGTH) liveSuccessRateHistory.shift();
  if (liveErrorRateHistory.length > HISTORY_LENGTH) liveErrorRateHistory.shift();
  if (liveAvgLatencyHistory.length > HISTORY_LENGTH) liveAvgLatencyHistory.shift();
  if (liveDataReceivedRateHistory.length > HISTORY_LENGTH) liveDataReceivedRateHistory.shift();
};

/**
 * Generates an ASCII line chart from a history array with dynamic scaling.
 * @param {number[]} history The array of numbers to plot.
 * @param {string} label The label for the chart.
 * @param {string} color The ANSI color for the line.
 * @param {string} unit The unit for the value (e.g., '%', 'ms', 'KB/s').
 * @returns {string} The ASCII line chart string.
 */
const generateAsciiLineChart = (history, label, color, unit = '') => {
  if (history.length === 0) return ` ${label}: ${style('No data', ANSI.DIM)}`;
  const chartHeight = 5; // Height of the chart in characters
  const chartWidth = HISTORY_LENGTH; // Width of the chart
  // Dynamic scaling based on current max value in history
  const currentMax = Math.max(...history);
  const maxValue = currentMax > 0 ?
    currentMax * 1.1 : 100; // 10% buffer or 100 if all zeros
  const scaledHistory = history.map(val => Math.min(chartHeight - 1, Math.floor((val / maxValue) * (chartHeight - 1))));
  // Get the latest value for display
  const latestValue = history[history.length - 1].toFixed(1);
  const fullLabel = `${label} (${latestValue}${unit})`;
  let chart = ` ${style(fullLabel.padEnd(20), ANSI.BOLD, color)} ${style('─'.repeat(chartWidth + 2), ANSI.DIM)}\n`;
  // Initialize chart grid with spaces
  const grid = Array(chartHeight).fill(0).map(() => Array(chartWidth).fill(' '));
  // Plot points
  for (let i = 0; i < scaledHistory.length; i++) {
    const y = chartHeight - 1 - scaledHistory[i]; // Invert Y-axis for console
    if (i < chartWidth) { // Ensure we don't go out of bounds for chartWidth
      grid[y][i] = style(config.graphPointChar, color); // Plot point with custom character
    }
  }
  // Render grid
  for (let y = 0; y < chartHeight; y++) {
    chart += ` ${style('|', ANSI.DIM)}${grid[y].join('')}${style('|', ANSI.DIM)}\n`;
  }
  chart += ` ${style('─'.repeat(chartWidth + 2), ANSI.DIM)}\n`;
  return chart;
};

/**
 * Prints live statistics to the console, updating the current line.
 * @param {number} totalExpectedRequests The total number of requests planned for the test.
 */
const printLiveStats = (totalExpectedRequests) => {
  if (config.logLevel === 'silent') return;
  const elapsed = (performance.now() - testStartTime) / 1000;
  const percent = ((totalRequestsSent / totalExpectedRequests) * 100).toFixed(1);
  const speed = elapsed > 0 ? (totalRequestsSent / elapsed).toFixed(2) : 0;
  // Use liveLatencyBuffer for live average latency
  const avgLatency = liveLatencyBuffer.length > 0 ? (liveLatencyBuffer.reduce((sum, r) => sum + r, 0) / liveLatencyBuffer.length).toFixed(2) : 'N/A';
  // Live error rate only considers 4xx, 5xx, and network errors.
  const errorRate = totalRequestsSent > 0 ?
    ((clientErrorCount + serverErrorCount + networkErrorCount) / totalRequestsSent * 100).toFixed(1) : '0.0';
  // Get the most recent CPU and Memory usage samples
  const currentCpu = cpuUsageHistory.length > 0 ? cpuUsageHistory[cpuUsageHistory.length - 1].toFixed(1) : 'N/A';
  const currentMem = memUsageHistory.length > 0 ? formatBytes(memUsageHistory[memUsageHistory.length - 1]) : 'N/A';
  const currentDataRate = liveDataReceivedRateHistory.length > 0 ? liveDataReceivedRateHistory[liveDataReceivedRateHistory.length - 1].toFixed(2) : 'N/A';

  // --- Terminal Clearing and Cursor Positioning ---
  // Calculate the number of lines the live display takes:
  // 1 (progress bar) + 1 (system metrics) + 1 (newline) + 1 (heatmap label) + 1 (heatmap bars) + 5 * (1 (graph label) + 5 (chart height) + 1 (bottom line))
  // Total: 1 + 1 + 1 + 1 + 1 + 5 * 7 = 5 + 35 = 40 lines.
  // Let's be precise:
  const numLinesPerGraph = 1 + 5 + 1; // Label + height + bottom line
  const numLinesToClear = 1 // Progress bar
    + 1 // System metrics
    + 1 // Newline before heatmap
    + 1 // Heatmap label
    + 1 // Heatmap bars
    + (numLinesPerGraph * 5); // 5 graphs
  // Move cursor up to the start of the live display area
  readline.moveCursor(process.stdout, 0, -numLinesToClear);
  // Clear from cursor down (clears the entire live display area)
  readline.clearScreenDown(process.stdout);
  // Move cursor back to the beginning of the first line for the next update (redundant after clearScreenDown, but safe)
  readline.cursorTo(process.stdout, 0, 0);

  // --- Advanced Progress Bar ---
  const progressBarWidth = 50; // Increased width for more info
  const progressRatio = (totalRequestsSent / totalExpectedRequests);
  const filledChars = Math.floor(progressRatio * progressBarWidth);
  const emptyChars = progressBarWidth - filledChars;
  // Dynamic color for progress bar based on error rate
  let progressBarColorBg = ANSI.hexBg('#28a745'); // Green for low error
  if (parseFloat(errorRate) > 5) progressBarColorBg = ANSI.hexBg('#ffc107'); // Yellow for moderate error
  if (parseFloat(errorRate) > 20) progressBarColorBg = ANSI.hexBg('#dc3545'); // Red for high error
  const progressBarText = ` ${percent}% | RPS: ${speed} | Lat: ${avgLatency}ms | Err: ${errorRate}% `;
  const textPadding = Math.max(0, progressBarWidth - progressBarText.length);
  const paddedProgressBarText = progressBarText + ' '.repeat(textPadding);
  const progressBar = style(paddedProgressBarText.substring(0, filledChars), progressBarColorBg, ANSI.BRIGHT_WHITE, ANSI.BOLD) +
    style(paddedProgressBarText.substring(filledChars), ANSI.hexBg('#333333'), ANSI.BRIGHT_BLACK);

  // --- Status Code Heatmap ---
  const totalCurrentRequests = totalRequestsSent; // Use totalRequestsSent for the denominator
  const p2xx = totalCurrentRequests > 0 ? (successCount / totalCurrentRequests * 100) : 0;
  const p3xx = totalCurrentRequests > 0 ? (redirectCount / totalCurrentRequests * 100) : 0; // 3xx percentage
  const p4xx = totalCurrentRequests > 0 ? (clientErrorCount / totalCurrentRequests * 100) : 0;
  const p5xx = totalCurrentRequests > 0 ? (serverErrorCount / totalCurrentRequests * 100) : 0;
  const pNetErr = totalCurrentRequests > 0 ?
    (networkErrorCount / totalCurrentRequests * 100) : 0;
  const heatmapBarWidth = 15;
  const getHeatmapBar = (percentage, colorChar, colorBg) => {
    const filled = Math.floor(percentage / 100 * heatmapBarWidth);
    const empty = heatmapBarWidth - filled;
    return style(colorChar.repeat(filled), colorBg) + ' '.repeat(empty);
  };
  const heatmap2xx = getHeatmapBar(p2xx, '🟩', ANSI.hexBg('#28a745')); // Green for success
  const heatmap3xx = getHeatmapBar(p3xx, '🟪', ANSI.hexBg('#6f42c1')); // Purple for redirects
  const heatmap4xx = getHeatmapBar(p4xx, '🟨', ANSI.hexBg('#ffc107')); // Yellow for client errors
  const heatmap5xx = getHeatmapBar(p5xx, '🟥', ANSI.hexBg('#dc3545')); // Red for server errors
  const heatmapNetErr = getHeatmapBar(pNetErr, '💥', ANSI.hexBg('#6c757d')); // Gray for network errors
  const lines = [];
  // Line 1: Main Progress Bar (now includes key metrics)
  lines.push(` ${progressBar}`);
  // Line 2: System Metrics (more compact)
  lines.push(
    style(` 💻 CPU: ${currentCpu}% `, ANSI.hexFg('#007bff')) + // Blue for CPU
    style(` 🧠 Mem: ${currentMem} `, ANSI.hexFg('#fd7e14')) + // Orange for memory
    style(` ⬇️ Data: ${currentDataRate}KB/s `, ANSI.hexFg('#20c997')) // Teal for data rate
  );
  // Line 3: Status Code Heatmap
  lines.push(
    `\n ${style('Status Heatmap:', ANSI.BRIGHT_WHITE)}` +
    ` 2xx: ${heatmap2xx} ${style(`${p2xx.toFixed(1)}%`, ANSI.hexFg('#28a745'))}` +
    ` 3xx: ${heatmap3xx} ${style(`${p3xx.toFixed(1)}%`, ANSI.hexFg('#6f42c1'))}` +
    ` 4xx: ${heatmap4xx} ${style(`${p4xx.toFixed(1)}%`, ANSI.hexFg('#ffc107'))}` +
    ` 5xx: ${heatmap5xx} ${style(`${p5xx.toFixed(1)}%`, ANSI.hexFg('#dc3545'))}` +
    ` Net: ${heatmapNetErr} ${style(`${pNetErr.toFixed(1)}%`, ANSI.hexFg('#6c757d'))}`
  );
  // Line 4-N: Live ASCII Graphs (with percentage/value in label)
  lines.push(generateAsciiLineChart(liveSuccessRateHistory, 'Success', ANSI.hexFg('#28a745'), '%'));
  lines.push(generateAsciiLineChart(liveErrorRateHistory, 'Error', ANSI.hexFg('#dc3545'), '%'));
  lines.push(generateAsciiLineChart(cpuUsageHistory,
    'CPU', ANSI.hexFg('#007bff'), '%'));
  lines.push(generateAsciiLineChart(liveAvgLatencyHistory, 'Avg Latency', ANSI.hexFg('#6f42c1'), 'ms'));
  lines.push(generateAsciiLineChart(liveDataReceivedRateHistory, 'Data Recv', ANSI.hexFg('#20c997'), 'KB/s'));
  // Write all lines at once
  process.stdout.write(lines.join('\n'));
  // Move cursor back to the beginning of the first line for the next update
  // This is crucial for the next redraw to happen at the correct position
  readline.cursorTo(process.stdout, 0, 0);
};

// --- Core Load Testing Logic ---
/**
 * Makes a single HTTP/HTTPS request to the target URL.
 * Records latency, status code, data transferred, and any errors.
 * @param {URL} parsedUrl The parsed URL object (from `new URL()`).
 * @param {string} method The HTTP method to use (e.g., 'GET', 'POST').
 * @param {string|null} requestBody The request body string, or null if no body.
 * @param {object} customHeaders User-defined custom headers.
 * @returns {Promise<void>} A Promise that resolves when the request completes (success or failure).
 */
const makeRequest = (parsedUrl, method, requestBody = null, customHeaders = {}) => {
  // Determine which agent (http or https) to use based on protocol
  const agent = parsedUrl.protocol === 'https:' ? https : http;
  // Standard request headers
  const defaultHeaders = {
    'User-Agent': 'SmartLoadTestBot/14.7 (Node.js)', // Custom User-Agent (updated to V14.7)
    'Accept': '*/*', // Accept all content types
    'Connection': 'keep-alive' // Keep connection alive for performance
  };
  // Merge default headers with custom headers. Custom headers override defaults.
  let requestHeaders = { ...defaultHeaders, ...customHeaders };
  let bodyBuffer = null;
  if (requestBody) {
    try {
      // Ensure the body is a string, convert if it's an object
      const bodyString = typeof requestBody === 'object' ?
        JSON.stringify(requestBody) : requestBody;
      bodyBuffer = Buffer.from(bodyString, 'utf8');
      requestHeaders['Content-Type'] = requestHeaders['Content-Type'] || 'application/json'; // Set if not already set by custom headers
      requestHeaders['Content-Length'] = bodyBuffer.length;
    } catch (e) {
      if (config.logLevel !== 'silent') {
        console.error(style(`Error preparing request body: ${e.message}`, ANSI.hexFg('#dc3545')));
      }
      // Proceed without body if there's an error
      bodyBuffer = null;
      delete requestHeaders['Content-Type'];
      delete requestHeaders['Content-Length'];
    }
  }
  // Request options
  const options = {
    method: method, // Use the provided HTTP method
    hostname: parsedUrl.hostname,
    port: parsedUrl.port || (parsedUrl.protocol === 'https:' ? 443 : 80), // Default ports
    path: parsedUrl.pathname + parsedUrl.search, // Correct way to get full path with query
    headers: requestHeaders,
    timeout: 15000 // Request timeout in milliseconds (15 seconds)
  };
  // Estimate header size for data sent calculation
  // This is a rough estimation as actual header size can vary
  const estimatedHeaderSize = Buffer.byteLength(
    Object.entries(requestHeaders)
      .map(([k, v]) => `${k}: ${v}`)
      .join('\r\n') +
    `${options.method} ${options.path} HTTP/1.1\r\nHost: ${options.hostname}\r\n\r\n`
  );
  totalDataSent += estimatedHeaderSize + (bodyBuffer ?
    bodyBuffer.length : 0); // Accumulate total data sent
  // Track HTTP method used
  httpMethods[options.method] = (httpMethods[options.method] || 0) + 1;
  return new Promise(resolve => {
    const reqStart = performance.now(); // Mark request start time
    let dnsLookupStart = 0, dnsLookupEnd = 0;
    let tcpConnectStart = 0, tcpConnectEnd = 0;
    let tlsHandshakeStart = 0, tlsHandshakeEnd = 0;
    const req = agent.request(options, (res) => {
      let currentBytesReceived = 0;
      // Ensure network timings are captured when response is received
      const now = performance.now();
      if (parsedUrl.protocol === 'https:') {
        // For HTTPS, TLS handshake ends when response headers are received
        if (tlsHandshakeStart > 0 && tlsHandshakeEnd === 0) tlsHandshakeEnd = now;
      } else {
        // For HTTP, TCP connect ends when response headers are received
        if (tcpConnectStart > 0 && tcpConnectEnd === 0) tcpConnectEnd = now;
      }
      // Track response headers for analysis
      const responseHeaders = res.headers;
      const contentType = responseHeaders['content-type'] ? responseHeaders['content-type'].split(';')[0].toLowerCase() : 'N/A';
      const serverHeader = responseHeaders['server'] ?
        responseHeaders['server'].toLowerCase() : 'N/A';
      responseContentTypes[contentType] = (responseContentTypes[contentType] || 0) + 1;
      responseServers[serverHeader] = (responseServers[serverHeader] || 0) + 1;
      // Listen for data chunks and accumulate received bytes
      res.on('data', chunk => {
        currentBytesReceived += chunk.length;
      });
      // Listen for request end
      res.on('end', () => {
        const latency = performance.now() - reqStart; // Calculate latency
        totalDataReceived += currentBytesReceived; // Accumulate total data received
        // Record request result
        const statusCode = res.statusCode;
        let errorType = null; // This will only be set for true errors (4xx, 5xx, network)
        if (statusCode >= 200 && statusCode < 300) {
          successCount++; // Increment overall success counter
        } else if (statusCode >= 300 && statusCode < 400) {
          redirectCount++; // Increment redirect counter
          // Note: For final report's "Error Breakdown", we still list 3xx,
          // but they are explicitly excluded from the main "Error Rate" calculation.
          errorTypes[`HTTP_REDIRECT_${statusCode}`] = (errorTypes[`HTTP_REDIRECT_${statusCode}`] || 0) + 1;
          errorTypes['HTTP_REDIRECT'] = (errorTypes['HTTP_REDIRECT'] || 0) + 1;
        } else if (statusCode >= 400 && statusCode < 500) {
          clientErrorCount++; // Increment 4xx error counter
          errorType = `HTTP_CLIENT_ERROR_${statusCode}`; // More specific HTTP error
          errorTypes['HTTP_CLIENT_ERROR'] = (errorTypes['HTTP_CLIENT_ERROR'] || 0) + 1;
          errorTypes[`HTTP_CLIENT_ERROR_${statusCode}`] = (errorTypes[`HTTP_CLIENT_ERROR_${statusCode}`] || 0) + 1;
        } else if (statusCode >= 500 && statusCode < 600) {
          serverErrorCount++; // Increment 5xx error counter
          errorType = `HTTP_SERVER_ERROR_${statusCode}`; // More specific HTTP error
          errorTypes['HTTP_SERVER_ERROR'] = (errorTypes['HTTP_SERVER_ERROR'] || 0) + 1;
          errorTypes[`HTTP_SERVER_ERROR_${statusCode}`] = (errorTypes[`HTTP_SERVER_ERROR_${statusCode}`] || 0) + 1;
        } else {
          // For other unexpected codes, categorize as unknown error
          errorType = `HTTP_UNKNOWN_ERROR_${statusCode}`;
          errorTypes['HTTP_UNKNOWN_ERROR'] = (errorTypes['HTTP_UNKNOWN_ERROR'] || 0) + 1;
        }
        // Push latency to live buffer for ANY request that received an HTTP status code
        liveLatencyBuffer.push(latency);
        if (liveLatencyBuffer.length > LIVE_LATENCY_BUFFER_SIZE) {
          liveLatencyBuffer.shift(); // Remove oldest if buffer full
        }
        requestResults.push({
          latency: latency,
          statusCode: statusCode,
          errorType: errorType, // This will be null for 2xx and 3xx, set for 4xx/5xx/network
          bytesReceived: currentBytesReceived,
          bytesSent: estimatedHeaderSize + (bodyBuffer ? bodyBuffer.length : 0),
          method: options.method,
          responseHeaders: responseHeaders, // Store full headers for potential future analysis
          dnsLookupMs: dnsLookupEnd > 0 ? dnsLookupEnd - dnsLookupStart : null,
          tcpConnectMs: tcpConnectEnd > 0 ? tcpConnectEnd - tcpConnectStart : null,
          tlsHandshakeMs: tlsHandshakeEnd > 0 ?
            tlsHandshakeEnd - tlsHandshakeStart : null,
        });
        // Increment status code counter (for all received status codes)
        statusCodes[statusCode] = (statusCodes[statusCode] || 0) + 1;
        activeRequests--; // Decrement active request count
        resolve(); // Resolve the promise
      });
    });
    req.once('socket', (socket) => {
      socket.once('lookup', () => {
        dnsLookupStart = performance.now();
      });
      socket.once('connect', () => {
        dnsLookupEnd = performance.now(); // DNS lookup ends when connect starts
        tcpConnectStart = performance.now();
      });
      if (parsedUrl.protocol === 'https:') {
        socket.once('secureConnect', () => {
          tcpConnectEnd = performance.now(); // TCP connect ends when TLS handshake starts
          tlsHandshakeStart = performance.now();
        });
      }
    });
    // Listen for request errors (network errors, timeouts)
    req.on('error', (err) => {
      const now = performance.now();
      // Ensure timings are recorded even on error
      if (dnsLookupStart > 0 && dnsLookupEnd === 0) dnsLookupEnd = now;
      if (tcpConnectStart > 0 && tcpConnectEnd === 0) tcpConnectEnd = now;
      if (tlsHandshakeStart > 0 && tlsHandshakeEnd === 0) tlsHandshakeEnd = now;
      const errorName = err.code || 'UNKNOWN_NETWORK_ERROR'; // Use error code or 'UNKNOWN_NETWORK_ERROR'
      errorTypes[errorName] = (errorTypes[errorName] || 0) + 1; // Increment error type counter
      networkErrorCount++; // Increment network error counter
      // Record failed request result
      requestResults.push({
        latency: performance.now() - reqStart,
        statusCode: null, // No status code on network error
        errorType: errorName, // This is a true error
        bytesReceived: 0,
        bytesSent: estimatedHeaderSize + (bodyBuffer ? bodyBuffer.length : 0),
        method: options.method,
        responseHeaders: {}, // No response headers on error
        dnsLookupMs: dnsLookupEnd > 0 ? dnsLookupEnd - dnsLookupStart : null,
        tcpConnectMs: tcpConnectEnd > 0 ? tcpConnectEnd - tcpConnectStart : null,
        tlsHandshakeMs: tlsHandshakeEnd > 0 ?
tlsHandshakeEnd - tlsHandshakeStart : null, }); activeRequests--; // Decrement active request count resolve(); // Resolve the promise }); // Listen for request timeout req.on('timeout', () => { const now = performance.now(); // Ensure timings are recorded even on timeout if (dnsLookupStart > 0 && dnsLookupEnd === 0) dnsLookupEnd = now; if (tcpConnectStart > 0 && tcpConnectEnd === 0) tcpConnectEnd = now; if (tlsHandshakeStart > 0 && tlsHandshakeEnd === 0) tlsHandshakeEnd = now; errorTypes['TIMEOUT'] = (errorTypes['TIMEOUT'] || 0) + 1; // Increment timeout counter networkErrorCount++; // Increment network error counter // Record timeout error result requestResults.push({ latency: performance.now() - reqStart, statusCode: null, errorType: 'TIMEOUT', // This is a true error bytesReceived: 0, bytesSent: estimatedHeaderSize + (bodyBuffer ? bodyBuffer.length : 0), method: options.method, responseHeaders: {}, // No response headers on timeout dnsLookupMs: dnsLookupEnd > 0 ? dnsLookupEnd - dnsLookupStart : null, tcpConnectMs: tcpConnectEnd > 0 ? tcpConnectEnd - tcpConnectStart : null, tlsHandshakeMs: tlsHandshakeEnd > 0 ? tlsHandshakeEnd - tlsHandshakeStart : null, }); req.abort(); // Abort the request activeRequests--; // Decrement active request count resolve(); // Resolve the promise }); if (bodyBuffer) { req.write(bodyBuffer); // Write the request body } req.end(); // Send the request activeRequests++; // Increment active request count // Update peak active requests if (activeRequests > peakActiveRequests) { peakActiveRequests = activeRequests; } totalRequestsSent++; // Increment total requests sent count }); }; /** * Runs the main load test loop through defined stages. * Controls the rate of requests (RPS) and concurrency (users) dynamically per stage. * @param {object} testConfig The configuration object for the test. 
 */
const runLoad = async (testConfig) => {
  const { targetUrl, stages, httpMethod, requestBody, customHeaders } = testConfig;
  let parsedUrl;
  try {
    parsedUrl = new URL(targetUrl);
  } catch (e) {
    if (config.logLevel !== 'silent') {
      console.error(style(`❌ Critical Error: Could not parse URL "${targetUrl}". Please ensure it's a valid and complete URL (e.g., https://example.com).`, ANSI.hexFg('#dc3545')));
    }
    process.exit(1);
  }

  testStartTime = performance.now(); // Mark the start of the test

  // Calculate the total expected requests across all stages
  config.totalExpectedRequests = stages.reduce((sum, stage) => sum + (stage.duration * stage.targetRPS), 0);

  // Start sampling system metrics every second
  systemMetricsIntervalId = setInterval(sampleSystemMetrics, 1000);

  // Start updating live stats on the console every 500ms
  if (config.logLevel !== 'silent') {
    // Initial clear for the first render.
    // This pushes the initial prompt lines up so the live display starts from a clean slate.
    process.stdout.write('\n'.repeat(30)); // Push initial prompts up, assuming enough lines
    liveStatsIntervalId = setInterval(() => printLiveStats(config.totalExpectedRequests), 500);
  }

  for (const [index, stage] of stages.entries()) {
    if (config.logLevel !== 'silent') {
      // Clear the live stats area before printing stage info
      const numLinesPerGraph = 1 + 5 + 1; // Label + height + bottom line
      const numLinesToClear =
        1 // Progress bar
        + 1 // System metrics
        + 1 // Newline before heatmap
        + 1 // Heatmap label
        + 1 // Heatmap bars
        + (numLinesPerGraph * 5); // 5 graphs
      readline.moveCursor(process.stdout, 0, -numLinesToClear);
      readline.clearScreenDown(process.stdout);
      readline.cursorTo(process.stdout, 0, 0); // Reset cursor to top-left
      console.log(style(`\nStarting Stage ${index + 1}/${stages.length}: Duration ${stage.duration}s, Target RPS ${stage.targetRPS}, Target Users ${stage.targetUsers}\n`, ANSI.hexFg('#007bff')));
      // Add some newlines after the stage info so the live stats don't overwrite it immediately
      process.stdout.write('\n'.repeat(numLinesToClear)); // Reserve space for live stats
      readline.cursorTo(process.stdout, 0, process.stdout.rows - numLinesToClear); // Move cursor to the start of the reserved area
    }

    const currentStageStartTime = performance.now();
    const stageEndTimeMs = currentStageStartTime + stage.duration * 1000;
    let stageRequestCounter = 0;
    const stageRPS = stage.targetRPS;
    const stageUsers = stage.targetUsers;
    const requestIntervalMs = stageRPS > 0 ? (1000 / stageRPS) : Infinity; // Avoid division by zero
    let lastRequestTime = performance.now();

    while (performance.now() < stageEndTimeMs && (stageRPS === 0 || stageRequestCounter < (stage.duration * stageRPS))) {
      // Concurrency control: wait while the number of active requests exceeds the concurrent user limit
      while (activeRequests >= stageUsers) {
        await new Promise(resolve => setTimeout(resolve, 10)); // Pause briefly to allow requests to complete
      }

      // Rate limiting: ensure requests are sent at the desired RPS
      const now = performance.now();
      const timeToWait = requestIntervalMs - (now - lastRequestTime); // Time remaining until the next request should be sent
      if (timeToWait > 0) {
        await new Promise(resolve => setTimeout(resolve, timeToWait)); // Wait if needed
      }
      lastRequestTime = performance.now(); // Update the last request time

      // Initiate a new request (non-blocking)
      makeRequest(parsedUrl, httpMethod, requestBody, customHeaders); // Pass method, body, and custom headers
      stageRequestCounter++; // Count requests initiated in this stage
    }

    // After each stage, wait for any remaining active requests from this stage to complete.
    // This prevents requests from one stage spilling over and skewing the next stage's metrics.
    if (config.logLevel !== 'silent') {
      // Clear the live stats area before printing the completion message
      const numLinesPerGraph = 1 + 5 + 1;
      const numLinesToClear = 1 + 1 + 1 + 1 + 1 + (numLinesPerGraph * 5);
      readline.moveCursor(process.stdout, 0, -numLinesToClear);
      readline.clearScreenDown(process.stdout);
      readline.cursorTo(process.stdout, 0, 0); // Reset cursor to top-left
      console.log(style(`\nStage ${index + 1} completed. Waiting for pending requests...\n`, ANSI.hexFg('#adb5bd')));
      process.stdout.write('\n'.repeat(numLinesToClear)); // Reserve space for live stats
      readline.cursorTo(process.stdout, 0, process.stdout.rows - numLinesToClear); // Move cursor to the start of the reserved area
    }

    while (activeRequests > 0 && performance.now() < stageEndTimeMs + 5000) { // Allow a small buffer for pending requests
      await new Promise(resolve => setTimeout(resolve, 100));
    }
  }

  // After all stages, ensure all remaining active requests are completed
  if (config.logLevel !== 'silent') {
    // Clear the live stats area before printing the final completion message
    const numLinesPerGraph = 1 + 5 + 1;
    const numLinesToClear = 1 + 1 + 1 + 1 + 1 + (numLinesPerGraph * 5);
    readline.moveCursor(process.stdout, 0, -numLinesToClear);
    readline.clearScreenDown(process.stdout);
    readline.cursorTo(process.stdout, 0, 0); // Reset cursor to top-left
    console.log(style('\nAll stages completed. Waiting for final pending requests...\n', ANSI.hexFg('#adb5bd')));
    // No need to reserve space after this, as the report will be printed next
  }
  while (activeRequests > 0) {
    await new Promise(resolve => setTimeout(resolve, 100));
  }

  testEndTime = performance.now(); // Record the precise end time
};

/**
 * Creates a styled banner for the report.
 * @param {string} title The title of the banner.
 * @param {string} colorHexFg Hex code for the foreground color.
 * @param {string} colorHexBg Hex code for the background color.
 * @returns {string} Styled banner string.
 */
const createBanner = (title, colorHexFg, colorHexBg) => {
  const lineLength = 91; // Width of the banner
  const line = '═'.repeat(lineLength);
  const paddingLength = Math.floor((lineLength - title.length) / 2);
  const padding = ' '.repeat(paddingLength);
  // Add one extra space when the total padding (lineLength - title.length) is odd,
  // since Math.floor drops the remainder. (Checking title.length alone is wrong for odd lineLength.)
  const extraSpace = ((lineLength - title.length) % 2 !== 0) ? ' ' : '';
  return style(
    `╔${line}╗\n` +
    `║${padding}${style(title, ANSI.BOLD, ANSI.BRIGHT_WHITE, ANSI.hexBg(colorHexBg))}${padding}${extraSpace}║\n` +
    `╚${line}╝`,
    ANSI.BOLD,
    ANSI.hexFg(colorHexFg)
  );
};

/**
 * Creates a styled header for report sections.
 * @param {string} title The title of the section.
 * @param {string} colorFg ANSI code for the foreground color.
 * @param {string} dividerChar Divider character (e.g., '─', '═').
 * @returns {string} Styled section header string.
 */
const createSectionHeader = (title, colorFg, dividerChar = '─') => {
  const dividerLength = 80; // Width of the divider
  const divider = dividerChar.repeat(Math.max(0, dividerLength - title.length - 4)); // 4 accounts for "─── " and " "
  return style(`\n\n─── ${title} ${divider}\n`, ANSI.BOLD, colorFg);
};

/**
 * Generates an ASCII histogram for the latency distribution.
 * @param {number[]} latencies Sorted array of latencies.
 * @returns {string} ASCII histogram string.
 */
const generateLatencyHistogram = (latencies) => {
  if (latencies.length === 0) return style(' No latency data to generate histogram.', ANSI.DIM);

  // Define buckets
  const buckets = [
    { label: '0-50ms', max: 50, count: 0 },
    { label: '51-100ms', max: 100, count: 0 },
    { label: '101-250ms', max: 250, count: 0 },
    { label: '251-500ms', max: 500, count: 0 },
    { label: '501-1000ms', max: 1000, count: 0 },
    { label: '1001-2000ms', max: 2000, count: 0 },
    { label: '>2000ms', max: Infinity, count: 0 }
  ];

  // Populate buckets
  latencies.forEach(latency => {
    for (let i = 0; i < buckets.length; i++) {
      if (latency <= buckets[i].max) {
        buckets[i].count++;
        break;
      }
    }
  });

  const maxCount = Math.max(...buckets.map(b => b.count));
  const histogramWidth = 40; // Max width of a bar
  let histogramOutput = '';

  buckets.forEach(bucket => {
    const barLength = maxCount > 0 ? Math.round((bucket.count / maxCount) * histogramWidth) : 0;
    const bar = '█'.repeat(barLength);
    const percentage = (bucket.count / latencies.length * 100).toFixed(1);
    let barColor = ANSI.hexFg('#28a745'); // Green
    if (bucket.max > 500) barColor = ANSI.hexFg('#dc3545'); // Red
    else if (bucket.max > 100) barColor = ANSI.hexFg('#ffc107'); // Yellow
    histogramOutput += ` ${style(bucket.label.padEnd(12), ANSI.hexFg('#17a2b8'))} | ${style(bar.padEnd(histogramWidth), barColor)} ${style(`${bucket.count} (${percentage}%)`, ANSI.hexFg('#adb5bd'))}\n`; // Light gray for counts
  });

  return histogramOutput;
};

/**
 * Generates a formatted table for latency percentiles.
 * @param {n