agentsqripts

Comprehensive static code analysis toolkit for identifying technical debt, security vulnerabilities, performance issues, and code quality problems

116 lines (101 loc) 6.28 kB
/**
 * @file Match confidence calculator for security vulnerability detection
 * @description Calculates confidence scores for security vulnerability pattern matches.
 *
 * This module implements a sophisticated confidence scoring system that evaluates the
 * likelihood that a detected pattern represents a real security vulnerability rather
 * than a false positive. The scoring considers pattern specificity, contextual clues,
 * and code structure to provide analysts with reliable prioritization data.
 */

/**
 * Calculate confidence score for a security vulnerability pattern match
 * @param {RegExp} pattern - The regular expression pattern that matched
 * @param {string} match - The actual text that was matched by the pattern
 * @param {Object} context - Additional context about where the match occurred
 * @param {boolean} context.inFunction - Whether the match is inside a function scope
 * @param {boolean} context.hasUserInput - Whether user input is involved in the match
 * @param {string} context.fileType - The type of file where the match occurred
 * @param {string} context.filePath - Path of the matched file, used by the path-based heuristics below
 * @returns {number} Confidence score between 0.0 and 1.0
 *
 * Rationale: Confidence scoring is critical for reducing false positives in security
 * analysis. A poorly calibrated confidence system leads to alert fatigue, where analysts
 * ignore real vulnerabilities due to noise from false positives. This algorithm combines
 * multiple factors that correlate with vulnerability authenticity, based on security
 * research and practical experience.
 */
const calculateMatchConfidence = (pattern, match, context) => {
  let confidence = 0.3; // Lower base confidence to reduce false positives

  // Adjust based on pattern specificity - more specific patterns are generally more accurate.
  // Rationale: Longer patterns typically include more context and are less likely to match
  // benign code. Short patterns like /eval/ catch many false positives, while longer patterns
  // that include surrounding context are more precise indicators of actual vulnerabilities.
  if (pattern.source.length > 20) confidence += 0.2;

  // Case-insensitive patterns are less specific and more prone to false positives.
  // Rationale: Case sensitivity often indicates exact API calls or specific vulnerable
  // patterns. Case-insensitive matching catches more variations but also more noise.
  if (pattern.flags.includes('i')) confidence -= 0.1;

  // Function context increases confidence - vulnerabilities typically occur in executable code.
  // Rationale: Code inside functions is more likely to be executed and therefore to represent
  // a real attack vector. Global-scope matches might be configuration or dead code.
  if (context.inFunction) confidence += 0.1;

  // User input involvement significantly increases vulnerability likelihood.
  // Rationale: Most exploitable vulnerabilities involve untrusted user input reaching
  // dangerous operations. Patterns that occur near user input sources are much more
  // likely to represent real attack vectors than similar patterns with static data.
  if (context.hasUserInput) confidence += 0.2;

  // Enhanced file type context with more comprehensive detection.
  // Rationale: Different file types have different security profiles. Test files often
  // contain intentionally vulnerable code for testing purposes, while configuration files
  // rarely contain exploitable code patterns.
  if (context.fileType === 'test' || context.filePath.includes('.test.') || context.filePath.includes('/test')) {
    confidence -= 0.25; // Test files often have intentional vulnerabilities for testing
  }
  if (context.fileType === 'config' || context.filePath.includes('config/') || context.filePath.includes('.config.')) {
    confidence -= 0.2; // Config files are less likely to have exploitable code
  }
  if (context.filePath.includes('demo/') || context.filePath.includes('example/') || context.filePath.includes('tmp/')) {
    confidence -= 0.4; // Demo and temp files often have simplified security for illustration
  }
  if (context.filePath.includes('cli/') || context.filePath.includes('lib/') || context.filePath.includes('config/')) {
    confidence -= 0.2; // Core library files are less likely to have user-input vulnerabilities
  }

  // Ensure confidence stays within valid bounds [0.0, 1.0]
  return Math.min(Math.max(confidence, 0.0), 1.0);
};

/**
 * Calculate confidence adjustments based on surrounding code context
 * @param {string} surroundingCode - Code context around the vulnerability match
 * @param {Object} match - The vulnerability match object
 * @returns {number} Confidence adjustment value (-0.3 to +0.3)
 *
 * Rationale: The code surrounding a potential vulnerability provides crucial context
 * for determining whether it is actually exploitable. Security frameworks, input validation,
 * and proper error handling can neutralize otherwise dangerous patterns.
 */
const calculateContextualConfidence = (surroundingCode, match) => {
  let adjustment = 0;

  // Look for security mitigations that reduce vulnerability likelihood.
  // Rationale: Modern frameworks and security libraries provide built-in protections
  // that neutralize common vulnerability patterns. Detecting these mitigations helps
  // reduce false positives by recognizing when dangerous patterns are properly protected.

  // Input validation patterns reduce confidence in injection vulnerabilities
  if (/validate|sanitize|escape|filter/i.test(surroundingCode)) {
    adjustment -= 0.2; // Strong mitigation signal
  }

  // Try-catch blocks suggest error handling awareness
  if (/try\s*\{[\s\S]*catch/i.test(surroundingCode)) {
    adjustment -= 0.1; // Moderate mitigation signal
  }

  // Security library usage indicates security awareness
  if (/helmet|csurf|bcrypt|crypto/i.test(surroundingCode)) {
    adjustment -= 0.15; // Security-conscious code is less likely to have vulnerabilities
  }

  // Dynamic construction increases confidence in injection attacks
  if (/\+.*\+|concat|template|format/i.test(surroundingCode)) {
    adjustment += 0.2; // Dynamic string building often leads to injection vulnerabilities
  }

  return Math.min(Math.max(adjustment, -0.3), 0.3); // Limit adjustment range
};

module.exports = {
  calculateMatchConfidence,
  calculateContextualConfidence
};
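
For context, a minimal usage sketch follows (it is not part of the published file). The require path, the example pattern, the file paths, and the way the base score and contextual adjustment are combined are all assumptions for illustration; the module itself only exports the two functions above and leaves combining their results to the caller.

// Usage sketch - hypothetical paths and inputs, not from the package itself.
const {
  calculateMatchConfidence,
  calculateContextualConfidence
} = require('./match-confidence-calculator'); // assumed local path to this module

// Hypothetical finding: eval() applied to request data inside a route handler.
const pattern = /eval\s*\(\s*req\.(query|body|params)/i;
const matchedText = 'eval(req.query.expr)';
const context = {
  inFunction: true,                     // match sits inside a handler function
  hasUserInput: true,                   // req.query feeds the dangerous call
  fileType: 'source',
  filePath: 'src/routes/calculator.js'  // hypothetical path; no test/config/demo penalties apply
};

// 0.3 base + 0.2 (pattern > 20 chars) - 0.1 ('i' flag) + 0.1 (inFunction) + 0.2 (hasUserInput) = 0.7
const base = calculateMatchConfidence(pattern, matchedText, context);

// The surrounding code builds the eval argument by string concatenation and shows no mitigations,
// so only the dynamic-construction rule fires: adjustment = +0.2.
// (The match argument is currently unused by the implementation, so a simple object suffices.)
const surroundingCode = 'const expr = req.query.expr; eval(expr + " + 0");';
const adjustment = calculateContextualConfidence(surroundingCode, { text: matchedText });

// Combining the two values is up to the caller; clamped addition is one option.
const finalConfidence = Math.min(Math.max(base + adjustment, 0), 1); // 0.9
console.log(finalConfidence);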