<div align="center">
<img src="https://iili.io/Fzk26kG.jpg" alt="zerolabel" width="100" height="100">
<h1>zerolabel</h1>
<p><strong>Zero-shot classification made ridiculously simple</strong></p>
[![npm version](https://badge.fury.io/js/zerolabel.svg)](https://badge.fury.io/js/zerolabel)
[![TypeScript](https://img.shields.io/badge/TypeScript-ready-blue.svg)](https://www.typescriptlang.org/)
</div>
## ✨ What if you could classify **anything** without training models?
```typescript
import { classify } from 'zerolabel';

// Classify single or multiple texts at once
const results = await classify({
  texts: [
    'I love this product!',
    'This is terrible quality',
    'Not bad, could be better'
  ],
  labels: ['positive', 'negative', 'neutral'],
  apiKey: process.env.INFERENCE_API_KEY
});

// Get results for each text
results.forEach((result, i) => {
  console.log(`Text ${i + 1}: ${result.predicted_label} (${result.confidence}%)`);
});
```
**That's it.** Text, images, or both. Single items or batches. Any labels you want, all in a single API call.
## 🤔 The Problem
Building classification usually means:
- ❌ Collecting thousands of labeled examples
- ❌ Training models for hours/days
- ❌ Managing ML infrastructure
- ❌ Retraining when you need new categories
**zerolabel solves this in one line of code.**
## 💡 The Solution

Describe your categories as plain-English labels and call `classify`, as in the example above. The zero-shot model maps each input to the best-fitting label.
**That's it.** No training, no infrastructure, no complexity.
## ⚡ Installation
```bash
npm install zerolabel
```
## 🚀 Examples
### Text Classification (Single or Batch)
```typescript
// Process multiple texts efficiently
await classify({
  texts: [
    'Amazing product!',
    'Worst purchase ever',
    "It's okay",
    'Best value for money',
    'Would not recommend'
  ],
  labels: ['positive', 'negative', 'neutral'],
  apiKey: process.env.INFERENCE_API_KEY
});

// Or just one text
await classify({
  texts: ['Single text to classify'],
  labels: ['positive', 'negative', 'neutral'],
  apiKey: process.env.INFERENCE_API_KEY
});
```
### Image Classification
```typescript
await classify({
  images: ['data:image/jpeg;base64,...'],
  labels: ['cat', 'dog', 'bird'],
  apiKey: process.env.INFERENCE_API_KEY
});
```
### Both Together (Multimodal)
```typescript
await classify({
  texts: ['Check out this cute animal!'],
  images: ['data:image/jpeg;base64,...'],
  labels: ['cute cat', 'cute dog', 'not cute'],
  apiKey: process.env.INFERENCE_API_KEY
});
```
### Custom Categories
```typescript
await classify({
  texts: ['Fix login bug', 'Add dark mode', 'Server is down!'],
  labels: ['bug_report', 'feature_request', 'incident'],
  apiKey: process.env.INFERENCE_API_KEY
});
```
### Batch Processing Made Easy
Process thousands of texts efficiently in a single API call:
```typescript
import { classify } from 'zerolabel';

// Classify entire datasets at once
const reviews = [
  "Amazing product, highly recommend!",
  "Terrible quality, waste of money",
  "It's okay, nothing special",
  "Best purchase I've made this year",
  "Would not buy again",
  // ... thousands more
];

const results = await classify({
  texts: reviews,
  labels: ['positive', 'negative', 'neutral'],
  apiKey: process.env.INFERENCE_API_KEY
});

// Process results
results.forEach((result, index) => {
  console.log(`Review ${index + 1}: ${result.predicted_label} (${result.confidence}%)`);
});

// Or analyze by label distribution
const distribution = results.reduce((acc, result) => {
  acc[result.predicted_label] = (acc[result.predicted_label] || 0) + 1;
  return acc;
}, {} as Record<string, number>);
console.log('Sentiment distribution:', distribution);
```
**Benefits of batch processing:**
- ✅ **Faster**: Single API call vs. hundreds of individual requests
- ✅ **Cost-effective**: Reduced API overhead and latency
- ✅ **Simple**: Same API, just pass an array
- ✅ **Scalable**: Handle datasets of any size
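For very large datasets you may still want to split inputs into fixed-size batches and classify each batch in turn (any per-request size limit depends on your inference.net plan; the `chunk` helper below is a plain utility sketch, not part of the SDK):

```typescript
// Hypothetical helper: split an array into fixed-size chunks.
// zerolabel does not ship this; it is ordinary array slicing.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Usage sketch (assumes `classify`, `reviews`, `labels`, `apiKey` from above):
// for (const batch of chunk(reviews, 500)) {
//   const results = await classify({ texts: batch, labels, apiKey });
//   // ...merge results
// }
```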
## 🎯 Real-World Use Cases
| Use Case | Labels | Input |
|----------|--------|-------|
| **Email Triage** | `['urgent', 'normal', 'spam']` | Single email or batch of emails |
| **Content Moderation** | `['safe', 'nsfw', 'spam']` | User posts + images (single or batch) |
| **Support Tickets** | `['bug', 'feature', 'question']` | Ticket descriptions (process entire queue) |
| **Document Classification** | `['invoice', 'receipt', 'contract']` | Document images (single or batch) |
| **Sentiment Analysis** | `['positive', 'negative', 'neutral']` | Reviews/feedback (analyze all at once) |
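As a sketch of the email-triage row, you might route each message on its `predicted_label` (the `routeEmail` helper and queue names below are hypothetical, not part of the SDK):

```typescript
// Hypothetical routing logic for email triage:
// maps a predicted label to a destination queue name.
function routeEmail(predictedLabel: string): string {
  switch (predictedLabel) {
    case 'urgent': return 'oncall-inbox';
    case 'spam':   return 'quarantine';
    default:       return 'normal-inbox';
  }
}

console.log(routeEmail('urgent')); // 'oncall-inbox'
```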
## 🏗️ How It Works
1. **You provide**: Text/images and your custom labels
2. **We handle**: The AI model (Google Gemma 3-27B), prompting, and inference
3. **You get**: Instant predictions with confidence scores
<div align="center">
<img src="https://iili.io/Fzk38wx.webp" alt="Powered by Inference.net" width="120">
<p><em>Powered by inference.net infrastructure</em></p>
</div>
## 📊 Response Format
```json
[
  {
    "text": "I love this product!",
    "predicted_label": "positive",
    "confidence": 95.2,
    "probabilities": {
      "positive": 0.952,
      "negative": 0.038,
      "neutral": 0.010
    }
  }
]
```
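The response is plain data you can post-process however you like. Here is a sketch, using hand-written sample rows in the shape above (trimmed to the fields used), that flags low-confidence predictions for human review:

```typescript
// Hand-written sample results in the response shape shown above.
const results = [
  { text: 'I love this product!', predicted_label: 'positive', confidence: 95.2 },
  { text: 'Not bad, could be better', predicted_label: 'neutral', confidence: 61.4 },
];

// Route anything below a confidence threshold to manual review.
const THRESHOLD = 70;
const needsReview = results.filter(r => r.confidence < THRESHOLD);
const autoAccepted = results.filter(r => r.confidence >= THRESHOLD);

console.log(`auto: ${autoAccepted.length}, review: ${needsReview.length}`);
```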
## 🔧 Configuration
```typescript
import { ZeroLabelClient } from 'zerolabel';

const client = new ZeroLabelClient({
  apiKey: process.env.INFERENCE_API_KEY,
  maxRetries: 3
});

const results = await client.classify({
  texts: ['Hello world'],
  labels: ['greeting', 'question']
});
```
## 🔑 Getting Your API Key
1. Sign up at [inference.net](https://inference.net)
2. Get your API key from the dashboard
3. Set it as the `INFERENCE_API_KEY` environment variable
```bash
export INFERENCE_API_KEY="your-key-here"
```
## 💡 Why zerolabel?
| Traditional ML | zerolabel |
|----------------|-----------|
| Weeks to collect data | ✅ **Instant** |
| Hours to train models | ✅ **No training needed** |
| Complex infrastructure | ✅ **One npm install** |
| Fixed categories | ✅ **Any labels you want** |
| Expensive compute | ✅ **Pay per request** |
## 🌟 Live Demo
Try it yourself: **[zerolabel.dev](https://zerolabel.dev)**
## 📚 API Reference
### `classify(options)`
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `texts` | `string[]` | No* | Array of texts to classify (single or multiple) |
| `images` | `string[]` | No* | Array of base64 image data URIs |
| `labels` | `string[]` | ✅ | Your classification categories |
| `apiKey` | `string` | ✅ | Your inference.net API key (set as INFERENCE_API_KEY) |
| `criteria` | `string` | No | Additional classification criteria |
*At least one of `texts` or `images` is required
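The `criteria` option is documented above but not shown in the examples. A sketch of a request object using it (the criteria wording here is illustrative free-form guidance, not a required format):

```typescript
// Request options using the optional `criteria` field.
// The criteria text steers how labels are interpreted; this particular
// wording is an example, not an SDK requirement.
const options = {
  texts: ['App crashes when I tap the export button'],
  labels: ['bug', 'feature', 'question'],
  criteria: 'If a message reports broken behavior, prefer "bug" over "question".',
  apiKey: process.env.INFERENCE_API_KEY,
};

// await classify(options); // returns one result per entry in `texts`
```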
## 🛠️ TypeScript Support
Full TypeScript definitions included:
```typescript
import type {
  ClassificationInput,
  ClassificationResult,
  ZeroLabelConfig
} from 'zerolabel';
```
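These types let you write typed helpers against the result shape. A sketch follows; the interface below simply mirrors the `ClassificationResult` shape from the Response Format section so the snippet stands alone, but in a real project you would import the type from `'zerolabel'` instead:

```typescript
// Local mirror of the result shape, so this sketch is self-contained.
// In real code: import type { ClassificationResult } from 'zerolabel';
interface ClassificationResult {
  text?: string;
  predicted_label: string;
  confidence: number;
  probabilities: Record<string, number>;
}

// Return the runner-up label: useful for "did the model almost pick X?" checks.
function runnerUp(result: ClassificationResult): string {
  const ranked = Object.entries(result.probabilities)
    .sort(([, a], [, b]) => b - a);
  return ranked[1]?.[0] ?? result.predicted_label;
}

const sample: ClassificationResult = {
  predicted_label: 'positive',
  confidence: 95.2,
  probabilities: { positive: 0.952, negative: 0.038, neutral: 0.010 },
};
console.log(runnerUp(sample)); // 'negative'
```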
## ❓ FAQ
**Q: What models does this use?**
A: Google Gemma 3-27B, optimized for classification tasks.
**Q: How accurate is it?**
A: Comparable to fine-tuned models for most classification tasks, especially with descriptive labels.
**Q: Can I process multiple texts at once?**
A: Yes! Pass an array of texts and get results for each one in a single API call.
**Q: Can I use custom models?**
A: No, we use inference.net's infrastructure with optimized models for best performance.
**Q: Is there a rate limit?**
A: Limits depend on your inference.net plan.
## 🤝 Contributing
Issues and PRs welcome! See our [GitHub repo](https://github.com/mrmps/zerolabel).
## 📄 License
MIT - Use it however you want!
<div align="center">
<p>Made with ❤️ for developers who want AI classification without the complexity</p>
<p>
<a href="https://zerolabel.dev">Website</a> •
<a href="https://github.com/mrmps/zerolabel">GitHub</a> •
<a href="https://npmjs.com/package/zerolabel">npm</a>
</p>
</div>