
@promptbook/azure-openai


Promptbook: Run AI apps in plain human language across multiple models and platforms

import type { LlmExecutionTools } from '../../../../execution/LlmExecutionTools';
import type { CacheLlmToolsOptions } from './CacheLlmToolsOptions';

/**
 * Intercepts LLM tools and caches the results of their calls
 *
 * Note: It can take extended `LlmExecutionTools` and cache them while preserving their extra methods
 *
 * @param llmTools LLM tools to be intercepted with caching; they can contain extra methods like `totalUsage`
 * @param options Optional configuration of the cache
 * @returns LLM tools with the same functionality, with results served from the cache when available
 * @public exported from `@promptbook/core`
 */
export declare function cacheLlmTools<TLlmTools extends LlmExecutionTools>(
    llmTools: TLlmTools,
    options?: Partial<CacheLlmToolsOptions>,
): TLlmTools;

/**
 * TODO: [🧠][💸] Maybe make some common abstraction `interceptLlmTools` and use it here (or use a JavaScript Proxy?)
 * TODO: [🧠] Is there some meaningful way to test this util?
 * TODO: [👷‍♂️] Comprehensive manual about the construction of llmTools
 *       - Detailed explanation of caching strategies and appropriate storage selection for different use cases
 *       - Examples of how to combine multiple interceptors for advanced caching, logging, and usage tracking
 */
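
A minimal usage sketch follows. It is hedged: the `AzureOpenAiExecutionTools` constructor options shown here (`resourceName`, `deploymentName`, `apiKey`) are assumptions based on common Azure OpenAI configuration, and the concrete fields of `CacheLlmToolsOptions` are not visible in this declaration file.

import { cacheLlmTools } from '@promptbook/core';
import { AzureOpenAiExecutionTools } from '@promptbook/azure-openai';

// Wrap the concrete Azure OpenAI tools with the caching interceptor.
// The wrapper keeps the same `LlmExecutionTools` interface, so it can be
// passed anywhere the uncached tools would be used.
const llmTools = cacheLlmTools(
    new AzureOpenAiExecutionTools({
        // [ASSUMPTION] constructor options follow the usual Azure OpenAI setup
        resourceName: 'my-resource',
        deploymentName: 'my-deployment',
        apiKey: process.env.AZURE_OPENAI_API_KEY!,
    }),
    // The second argument is `Partial<CacheLlmToolsOptions>`, so it can be
    // omitted entirely; its fields (e.g. a storage backend) are assumptions here.
);

Because the options are `Partial<CacheLlmToolsOptions>`, the interceptor can be applied with defaults and configured only where needed.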
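For the first TODO above, a Proxy-based `interceptLlmTools` could look roughly like the sketch below. This is a hypothetical illustration of the idea in the TODO note, not the library's API; the `onCall` hook name is invented for the example.

// Hypothetical common interceptor abstraction using a JavaScript Proxy:
// every method access is wrapped so the hook fires before delegation.
function interceptLlmTools<TTools extends object>(
    tools: TTools,
    onCall: (method: string, args: unknown[]) => void,
): TTools {
    return new Proxy(tools, {
        get(target, property, receiver) {
            const value = Reflect.get(target, property, receiver);
            if (typeof value !== 'function') {
                return value;
            }
            // Report the call, then delegate to the original tools;
            // a caching variant would consult its storage before delegating.
            return (...args: unknown[]) => {
                onCall(String(property), args);
                return (value as (...fnArgs: unknown[]) => unknown).apply(target, args);
            };
        },
    });
}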