inference-server
Libraries and a server for building AI applications. Adapters for various native bindings allow local inference. Integrate it into your application, or run it as a microservice.
export * from './api/openai/index.js'
export * from './types/index.js'
export * from './pool.js'
export * from './instance.js'
export * from './store.js'
export * from './server.js'
export * from './http.js'
// Export helper libs that might be useful downstream
export * from './lib/loadImage.js'
export * from './lib/loadAudio.js'
export * from './lib/math.js'