node-llama-cpp

Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.

import { Llama } from "../../bindings/Llama.js";

export declare function interactivelyAskForModel({
    llama,
    modelsDirectory,
    allowLocalModels,
    downloadIntent,
    flashAttention,
    swaFullCache,
    useMmap
}: {
    llama: Llama;
    modelsDirectory?: string;
    allowLocalModels?: boolean;
    downloadIntent?: boolean;
    flashAttention?: boolean;
    swaFullCache?: boolean;
    useMmap?: boolean;
}): Promise<string>;
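The declaration above uses a single destructured options object with optional flags, resolving to a model path string. A minimal self-contained sketch of that calling pattern follows; the `Llama` stand-in type, the function body, and all default values here are illustrative assumptions, not the library's actual implementation (the real function prompts the user interactively in the CLI).

```typescript
// Stand-in for the Llama binding type (assumption: the real type lives in bindings/Llama.js).
type Llama = { name: string };

// Hypothetical implementation mirroring the declared signature shape.
// Defaults below are illustrative assumptions only.
async function interactivelyAskForModel({
    llama,
    modelsDirectory = "~/.node-llama-cpp/models",
    allowLocalModels = true,
    downloadIntent = true,
    flashAttention = false,
    swaFullCache = false,
    useMmap = true
}: {
    llama: Llama;
    modelsDirectory?: string;
    allowLocalModels?: boolean;
    downloadIntent?: boolean;
    flashAttention?: boolean;
    swaFullCache?: boolean;
    useMmap?: boolean;
}): Promise<string> {
    // The real CLI helper would prompt for or download a model here;
    // this stand-in just resolves to a path inside modelsDirectory.
    return `${modelsDirectory}/example.gguf`;
}

// Usage: only `llama` is required; every other option falls back to a default.
(async () => {
    const modelPath = await interactivelyAskForModel({ llama: { name: "llama" } });
    console.log(modelPath);
})();
```

Because every field other than `llama` is optional, callers can opt into individual features (for example `flashAttention: true`) without restating the rest of the options.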