node-llama-cpp

Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.

export declare function getRamUsageFromUnifiedVram(vramUsage: number, vramState: { total: number; free: number; unifiedSize: number; }): number;
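The declaration above only exposes the signature; the package does not show the implementation here. As a rough sketch of what such a helper could compute (this is an assumption, not node-llama-cpp's actual logic): on systems with unified memory, part of the VRAM is backed by system RAM, so one plausible model is to treat the unified region as filling last and count only the VRAM usage that spills past the dedicated portion as RAM usage.

```typescript
// Hypothetical sketch only — NOT node-llama-cpp's real implementation.
// Assumes the dedicated (non-unified) portion of VRAM fills first, so any
// usage beyond it is backed by system RAM via the unified region.
function getRamUsageFromUnifiedVram(
    vramUsage: number,
    vramState: { total: number; free: number; unifiedSize: number }
): number {
    const dedicatedVram = vramState.total - vramState.unifiedSize;
    const overflowIntoUnified = vramUsage - dedicatedVram;

    // Clamp to [0, unifiedSize]: no RAM is consumed while usage fits in
    // dedicated VRAM, and at most the whole unified region can back VRAM.
    return Math.min(Math.max(overflowIntoUnified, 0), vramState.unifiedSize);
}

// Example: 10 GB of total VRAM, 4 GB of which is unified with system RAM.
const state = { total: 10, free: 2, unifiedSize: 4 };
console.log(getRamUsageFromUnifiedVram(8, state)); // 2 — two units spill into unified memory
console.log(getRamUsageFromUnifiedVram(5, state)); // 0 — fits entirely in dedicated VRAM
```

Under this model the result is monotonic in `vramUsage` and never exceeds `unifiedSize`, which matches the intuition that only the unified slice of VRAM can ever show up as RAM pressure.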