
llmverify

AI Output Verification Toolkit — Local-first LLM safety, hallucination detection, PII redaction, prompt injection defense, and runtime monitoring. Zero telemetry. OWASP LLM Top 10 aligned.

/**
 * Latency Engine
 *
 * Monitors LLM response latency and detects anomalies.
 * Uses baseline comparison to identify performance degradation.
 *
 * WHAT THIS DOES:
 * ✅ Compares current latency to established baseline
 * ✅ Detects sudden latency spikes
 * ✅ Provides normalized anomaly score
 *
 * WHAT THIS DOES NOT DO:
 * ❌ Predict future latency
 * ❌ Identify root cause of latency issues
 * ❌ Account for network conditions
 *
 * @module engines/runtime/latency
 * @author Haiec
 * @license MIT
 */
import { CallRecord, EngineResult, BaselineState } from '../../types/runtime';

/**
 * Analyzes latency of an LLM call against baseline.
 *
 * @param call - The call record to analyze
 * @param baseline - Current baseline state
 * @param thresholds - Optional custom thresholds
 * @returns Engine result with latency analysis
 *
 * @example
 * const result = LatencyEngine(callRecord, baseline);
 * if (result.status === 'error') {
 *   console.log('Latency spike detected');
 * }
 */
export declare function LatencyEngine(call: CallRecord, baseline: Pick<BaselineState, 'avgLatencyMs'>, thresholds?: {
    warnRatio?: number;
    errorRatio?: number;
}): EngineResult;
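
The declaration file above only exposes the signature; the CallRecord, EngineResult, and BaselineState types live in the package's types/runtime module and are not shown here. As a minimal sketch of what the documented behavior implies, assuming CallRecord carries a latencyMs field, EngineResult reports a status plus a normalized score, and warnRatio/errorRatio default to 2x and 4x baseline (all of these are assumptions for illustration, not llmverify's actual source):

// Sketch only: shapes and defaults are inferred from the declaration
// above, not taken from llmverify's types/runtime module.
interface CallRecord { latencyMs: number; }          // assumed field
interface BaselineState { avgLatencyMs: number; }    // assumed field
type EngineStatus = 'ok' | 'warn' | 'error';
interface EngineResult { status: EngineStatus; score: number; }

function LatencyEngine(
  call: CallRecord,
  baseline: Pick<BaselineState, 'avgLatencyMs'>,
  thresholds?: { warnRatio?: number; errorRatio?: number }
): EngineResult {
  const warnRatio = thresholds?.warnRatio ?? 2;      // assumed default
  const errorRatio = thresholds?.errorRatio ?? 4;    // assumed default

  // Ratio of the current call's latency to the established baseline.
  const ratio = baseline.avgLatencyMs > 0
    ? call.latencyMs / baseline.avgLatencyMs
    : 1;

  // Normalize the anomaly score to [0, 1]:
  // 0 at or below baseline, 1 at or beyond the error threshold.
  const score = Math.min(Math.max((ratio - 1) / (errorRatio - 1), 0), 1);

  const status: EngineStatus =
    ratio >= errorRatio ? 'error'
    : ratio >= warnRatio ? 'warn'
    : 'ok';

  return { status, score };
}

A ratio-to-baseline comparison like this matches the stated scope: it flags sudden spikes relative to the established average without attempting prediction, root-cause analysis, or network-condition modeling.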