Axiom AI SDK provides:

- an API to wrap your AI calls with observability instrumentation
- offline evals
- online evals
[**axiom v0.51.1**](../../README.md)
***
[axiom](../../README.md) / [evals](../README.md) / Eval
# Function: Eval()
> **Eval**\<`TInput`, `TExpected`, `TOutput`, `Name`, `Capability`, `Step`\>(`name`, `params`): `void`
Creates and registers an evaluation suite with the given name and parameters.
This function sets up a complete evaluation pipeline: it runs your [EvalTask](../type-aliases/EvalTask.md)
against a collection of cases, scores the results, and produces detailed per-case reporting via EvalCaseReport.
## Type Parameters
### TInput
`TInput`
### TExpected
`TExpected`
### TOutput
`TOutput`
### Name
`Name` *extends* `string` = `string`
### Capability
`Capability` *extends* `string` = `string`
### Step
`Step` *extends* `string` = `string`
## Parameters
### name
`ValidateName`\<`Name`\>
Human-readable name for the evaluation suite
### params
`Omit`\<[`EvalParams`](../type-aliases/EvalParams.md)\<`TInput`, `TExpected`, `TOutput`\>, `"capability"` \| `"step"` \| `"scorers"`\> & `object`
[EvalParams](../type-aliases/EvalParams.md) configuration parameters for the evaluation
## Returns
`void`
## Example
```typescript
import { generateText } from 'ai';
import { Eval } from 'axiom/ai/evals';

Eval('Text Generation Quality', {
  capability: 'capability-name',
  data: async () => [
    { input: 'Explain photosynthesis', expected: 'Plants convert light to energy...' },
    { input: 'What is gravity?', expected: 'Gravity is a fundamental force...' }
  ],
  task: async ({ input }) => {
    const result = await generateText({
      model: yourModel, // your configured language model instance
      prompt: input
    });
    return result.text;
  },
  scorers: [similarityScorer, factualAccuracyScorer],
});
```
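The `similarityScorer` referenced above is not defined by the SDK; how scorers are declared depends on your setup. As a rough, hypothetical sketch (assuming a scorer is a plain function that receives a case's `output` and `expected` values and returns a numeric score, which may not match the SDK's actual scorer interface), a token-overlap similarity scorer could look like this:

```typescript
// Hypothetical scorer shape — an assumption for illustration, not the Axiom SDK's API.
type ScorerArgs = { output: string; expected?: string };

// Returns the fraction of expected tokens that appear in the output, in [0, 1].
const similarityScorer = ({ output, expected }: ScorerArgs): number => {
  if (!expected) return 0;
  const outputTokens = new Set(output.toLowerCase().split(/\W+/).filter(Boolean));
  const expectedTokens = expected.toLowerCase().split(/\W+/).filter(Boolean);
  if (expectedTokens.length === 0) return 0;
  const hits = expectedTokens.filter((t) => outputTokens.has(t)).length;
  return hits / expectedTokens.length;
};
```

Consult the SDK's scorer type definitions for the exact signature expected by the `scorers` array.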