llmverify

AI Output Verification Toolkit — Local-first LLM safety, hallucination detection, PII redaction, prompt injection defense, and runtime monitoring. Zero telemetry. OWASP LLM Top 10 aligned.

/**
 * Baseline Engine
 *
 * Maintains and updates baseline metrics for drift detection.
 * Uses exponential moving average for stability.
 *
 * WHAT THIS DOES:
 * ✅ Tracks average latency, token rate, similarity
 * ✅ Maintains response fingerprint baseline
 * ✅ Uses EMA for smooth baseline updates
 *
 * WHAT THIS DOES NOT DO:
 * ❌ Persist baselines across sessions (in-memory only)
 * ❌ Account for intentional model changes
 * ❌ Distinguish normal variation from anomalies
 *
 * @module engines/runtime/baseline
 * @author Haiec
 * @license MIT
 */
import { CallRecord, BaselineState, ResponseFingerprint } from '../../types/runtime';
/**
 * Manages baseline state for LLM monitoring.
 *
 * @example
 * const baseline = new BaselineEngine();
 *
 * // After each call
 * baseline.update(callRecord, fingerprint, similarity);
 *
 * // Get current baseline
 * const state = baseline.get();
 */
export declare class BaselineEngine {
    private baseline;
    private readonly learningRate;
    private readonly minSamples;
    /**
     * Creates a new BaselineEngine.
     *
     * @param learningRate - EMA learning rate (0-1, default: 0.1)
     * @param minSamples - Minimum samples before baseline is stable (default: 5)
     */
    constructor(learningRate?: number, minSamples?: number);
    /**
     * Updates baseline with new call data.
     *
     * @param call - The call record
     * @param fingerprint - Response fingerprint
     * @param similarity - Similarity score (0-1)
     */
    update(call: CallRecord, fingerprint: ResponseFingerprint, similarity?: number): void;
    /**
     * Gets current baseline state.
     */
    get(): BaselineState;
    /**
     * Checks if baseline has enough samples to be stable.
     */
    isStable(): boolean;
    /**
     * Resets baseline to initial state.
     */
    reset(): void;
    /**
     * Gets sample count.
     */
    getSampleCount(): number;
}
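The declaration file above only describes the interface; the implementation is not shipped in this file. As a minimal sketch of how an EMA-based baseline update like this could work, here is a simplified stand-alone version. The field names (`latencyMs`, `tokensPerSecond`, `avgSimilarity`, etc.) are assumptions for illustration, not the package's actual `CallRecord`/`BaselineState` shapes, and the seeding behavior on the first sample is a guess at one reasonable design.

```typescript
// Hypothetical, simplified stand-ins for the package's runtime types
// (the real CallRecord / BaselineState live in ../../types/runtime).
interface CallRecord {
  latencyMs: number;
  tokensPerSecond: number;
}

interface BaselineState {
  avgLatencyMs: number;
  avgTokensPerSecond: number;
  avgSimilarity: number;
  sampleCount: number;
}

class BaselineEngineSketch {
  private baseline: BaselineState = {
    avgLatencyMs: 0,
    avgTokensPerSecond: 0,
    avgSimilarity: 0,
    sampleCount: 0,
  };

  constructor(
    private readonly learningRate: number = 0.1,
    private readonly minSamples: number = 5,
  ) {}

  /**
   * EMA update: next = (1 - a) * current + a * observed.
   * The first sample seeds the baseline directly so early
   * averages are not dragged toward zero.
   */
  update(call: CallRecord, similarity: number = 1): void {
    const a = this.learningRate;
    const b = this.baseline;
    if (b.sampleCount === 0) {
      b.avgLatencyMs = call.latencyMs;
      b.avgTokensPerSecond = call.tokensPerSecond;
      b.avgSimilarity = similarity;
    } else {
      b.avgLatencyMs = (1 - a) * b.avgLatencyMs + a * call.latencyMs;
      b.avgTokensPerSecond = (1 - a) * b.avgTokensPerSecond + a * call.tokensPerSecond;
      b.avgSimilarity = (1 - a) * b.avgSimilarity + a * similarity;
    }
    b.sampleCount++;
  }

  get(): BaselineState {
    return { ...this.baseline }; // copy so callers cannot mutate internal state
  }

  isStable(): boolean {
    return this.baseline.sampleCount >= this.minSamples;
  }

  getSampleCount(): number {
    return this.baseline.sampleCount;
  }

  reset(): void {
    this.baseline = { avgLatencyMs: 0, avgTokensPerSecond: 0, avgSimilarity: 0, sampleCount: 0 };
  }
}
```

With the default learning rate of 0.1, each new observation moves the baseline 10% of the way toward the observed value, which smooths out one-off latency spikes while still tracking sustained drift; this also matches the stated non-goal of distinguishing normal variation from anomalies, since the EMA alone only summarizes, it does not judge.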