node-llama-cpp

Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.

export const ggufDefaultFetchRetryOptions = {
    retries: 10,
    factor: 2,
    minTimeout: 1000,
    maxTimeout: 1000 * 16
};
export const defaultExtraAllocationSize = 1024 * 1024 * 4; // 4MB
export const noDirectSubNestingGGufMetadataKeys = [
    "general.license",
    "tokenizer.chat_template"
];
//# sourceMappingURL=consts.js.map
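The `retries`/`factor`/`minTimeout`/`maxTimeout` shape matches the options convention of common npm retry libraries, where each attempt's delay is `minTimeout * factor^attempt`, capped at `maxTimeout`. The sketch below illustrates the delay schedule these defaults would produce under that convention; the `backoffDelays` helper is hypothetical, not part of node-llama-cpp.

```javascript
// Assumed defaults, copied from consts.js above.
const ggufDefaultFetchRetryOptions = {
    retries: 10,
    factor: 2,
    minTimeout: 1000,
    maxTimeout: 1000 * 16
};

// Hypothetical helper: compute the exponential-backoff delay (in ms)
// for each retry attempt, assuming the conventional
// delay = min(minTimeout * factor^attempt, maxTimeout) schedule.
function backoffDelays({retries, factor, minTimeout, maxTimeout}) {
    const delays = [];
    for (let attempt = 0; attempt < retries; attempt++) {
        delays.push(Math.min(minTimeout * factor ** attempt, maxTimeout));
    }
    return delays;
}

console.log(backoffDelays(ggufDefaultFetchRetryOptions));
// → [1000, 2000, 4000, 8000, 16000, 16000, 16000, 16000, 16000, 16000]
```

With these defaults, delays double from 1 s until hitting the 16 s cap at the fifth attempt, after which every remaining retry waits 16 s.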