inference-server
Libraries and a server for building AI applications. Adapters to various native bindings enable local inference. Integrate it into your application, or run it as a microservice.
// Package entry point: re-exports the public modules as a single barrel file.
export * from './api/openai/index.js'; // OpenAI-compatible API
export * from './types/index.js'; // shared types
// Core server building blocks
export * from './pool.js';
export * from './instance.js';
export * from './store.js';
export * from './server.js';
export * from './http.js';
// Helpers for loading images/audio and math utilities
export * from './lib/loadImage.js';
export * from './lib/loadAudio.js';
export * from './lib/math.js';
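
As a sketch of the integration path the description mentions, the snippet below starts the package's HTTP server for a locally served model. The startHTTPServer name and the shape of its options are assumptions for illustration, not the package's confirmed API; consult the exported server module for the actual entry point.

// Hypothetical usage: expose one locally loaded model through the
// OpenAI-compatible API. The function and option names below are assumed.
import { startHTTPServer } from 'inference-server'

startHTTPServer({
  listen: { port: 3000 },
  models: {
    'my-model': {
      task: 'text-completion',
      url: 'https://example.com/model.gguf', // placeholder model location
    },
  },
})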