# Flamflix SDK
> **BETA VERSION** - This package is currently in beta. It may contain bugs, undergo breaking changes, or ship incomplete features. Use with caution in production environments.
A powerful JavaScript SDK for building AI talking-avatar applications. Users ask questions by voice and receive video responses from AI avatars, with alpha-channel rendering, real-time audio processing, and straightforward integration.
## Beta Notice
This is a **beta version** of the Flamflix SDK. Please be aware that:
- **May contain bugs** - Report issues on GitHub
- **Breaking changes possible** - API may change between beta versions
- **Experimental features** - Some features may not be fully stable
- **Feedback welcome** - Help us improve by reporting bugs and suggesting features
## Features
- **AI Talking Avatar**: Create interactive AI avatars that respond to voice input
- **Alpha Channel Video Support**: Render videos with transparency using advanced canvas processing
- **Voice Input Recording**: Built-in microphone recording with customizable interaction modes
- **Real-time Processing**: Convert audio recordings to AI video responses
- **Secure Authentication**: Access token-based API integration
- **Responsive Design**: Works seamlessly across desktop and mobile devices
- **Multiple Build Formats**: ES modules, CommonJS, and UMD builds available
- **Fully Customizable**: Complete control over styling, positioning, and behavior
- **Zero Dependencies**: Lightweight and self-contained
- **Framework Agnostic**: Works with any JavaScript framework or vanilla JS
- **Mobile Optimized**: Touch-friendly controls for phones and tablets
- **TypeScript Support**: Full type definitions included
## Installation
### NPM/Yarn (Recommended)
```bash
npm install flamflix-sdk
# or
yarn add flamflix-sdk
```
### CDN (Script Tag)
```html
<script src="https://unpkg.com/flamflix-sdk/dist/flamflix-sdk.umd.min.js"></script>
```
## Quick Start
### Basic HTML Setup
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>AI Avatar Demo</title>
<style>
.avatar-container {
width: 600px;
height: 400px;
border: 2px solid #ddd;
border-radius: 12px;
margin: 20px auto;
position: relative;
}
</style>
</head>
<body>
<div id="avatar-container" class="avatar-container"></div>
<script src="https://unpkg.com/flamflix-sdk/dist/flamflix-sdk.umd.min.js"></script>
<script>
// Initialize AI Avatar
const sdk = new FlamflixSDK("your-access-token", "your-project-id");
const player = sdk.init("#avatar-container", {
micMode: "push-to-talk",
micPosition: "bottom-right",
});
console.log("AI Avatar ready for interaction");
</script>
</body>
</html>
```
### ES Modules (Modern JavaScript)
```javascript
import { FlamflixSDK } from "flamflix-sdk";
// Create SDK instance
const sdk = new FlamflixSDK("your-access-token", "your-project-id");
// Initialize AI avatar player
const player = sdk.init("#avatar-container", {
micMode: "hybrid",
micPosition: "top-right",
useDefaultMic: true,
});
// Optional: Monitor interaction states
setInterval(() => {
if (player.isRecording()) {
console.log("User is speaking...");
} else if (player.isProcessing()) {
console.log("AI is processing response...");
}
}, 500);
```
### React Integration
```jsx
import React, { useRef, useEffect, useState } from "react";
import { FlamflixSDK } from "flamflix-sdk";
function AIAvatar({ accessToken, projectId }) {
const containerRef = useRef(null);
const [player, setPlayer] = useState(null);
const [isRecording, setIsRecording] = useState(false);
const [isProcessing, setIsProcessing] = useState(false);
useEffect(() => {
if (containerRef.current && accessToken && projectId) {
const sdk = new FlamflixSDK(accessToken, projectId);
const playerInstance = sdk.init(containerRef.current, {
micMode: "push-to-talk",
micPosition: "bottom-center",
useDefaultMic: true,
});
setPlayer(playerInstance);
// Monitor states
const interval = setInterval(() => {
setIsRecording(playerInstance.isRecording());
setIsProcessing(playerInstance.isProcessing());
}, 200);
return () => {
  clearInterval(interval);
  playerInstance.reset(); // release player resources on unmount
};
}
}, [accessToken, projectId]);
return (
<div>
<div
ref={containerRef}
style={{
width: "100%",
height: "500px",
position: "relative",
border: "2px solid #e0e0e0",
borderRadius: "12px",
}}
/>
<div style={{ textAlign: "center", marginTop: "10px" }}>
{isRecording && <span>🔴 Listening...</span>}
{isProcessing && <span>🤔 AI is thinking...</span>}
{!isRecording && !isProcessing && <span>💬 Ready to chat</span>}
</div>
</div>
);
}
export default AIAvatar;
```
## Configuration Options
### SDK Constructor
```javascript
const sdk = new FlamflixSDK(accessToken, projectId);
```
**Parameters:**
- `accessToken` (string, required) - Your API access token
- `projectId` (string, required) - Your project identifier
### Initialization Options
```javascript
const player = sdk.init(container, options);
```
**Container Parameter:**
- `container` (HTMLElement | string | React.RefObject) - Container element, CSS selector, or React ref
**Options Object:**
| Option | Type | Default | Description |
| --------------- | ---------------------------------------- | ---------------- | ---------------------------------------------- |
| `useDefaultMic` | boolean | `true` | Whether to show the built-in microphone button |
| `micMode` | `'toggle' \| 'push-to-talk' \| 'hybrid'` | `'push-to-talk'` | Microphone interaction mode |
| `micPosition` | string | `'top-right'` | Position of the default microphone button |
### Microphone Modes
#### Toggle Mode (`'toggle'`)
Traditional click-to-start, click-to-stop behavior.
```javascript
const player = sdk.init("#container", {
micMode: "toggle",
});
```
**Behavior:**
- First click: Start recording
- Second click: Stop recording and process audio
- Best for: Longer conversations, traditional UI patterns
#### Push-to-Talk Mode (`'push-to-talk'`) - Default
Hold down to record, release to stop and send.
```javascript
const player = sdk.init("#container", {
micMode: "push-to-talk", // Default mode
});
```
**Behavior:**
- Hold down: Start recording after 200ms
- Release: Stop recording and process audio
- Quick taps (< 200ms): Ignored (prevents accidental recordings)
- Best for: Quick interactions, mobile apps
#### Hybrid Mode (`'hybrid'`)
Combines both behaviors - short tap for toggle, long press for push-to-talk.
```javascript
const player = sdk.init("#container", {
micMode: "hybrid",
});
```
**Behavior:**
- Quick tap (< 500ms): Toggle mode (start/stop recording)
- Long press (≥ 500ms): Push-to-talk mode (hold to record)
- Best for: Power users who want flexibility
### Microphone Positions
Available positions for the default microphone button:
- `'top-left'`
- `'top-right'` (default)
- `'top-center'`
- `'bottom-left'`
- `'bottom-right'`
- `'bottom-center'`
- `'left-center'`
- `'right-center'`
- `'center'`
```javascript
const player = sdk.init("#container", {
micPosition: "bottom-center",
});
```
## API Reference
### PlayerAPI Methods
The `PlayerAPI` object returned by `init()` provides the following methods:
#### Playback Control
**`play()`** → `Promise<void>`
Start video playback.
```javascript
await player.play();
```
**`pause()`** → `void`
Pause video playback.
```javascript
player.pause();
```
**`reset()`** → `void`
Reset the player to its initial state: stops recording, clears the video, and removes UI elements.
```javascript
player.reset();
```
#### Recording Control
**`startRecording()`** → `void`
Start audio recording programmatically.
```javascript
player.startRecording();
```
**`stopRecording()`** → `void`
Stop audio recording and process the audio.
```javascript
player.stopRecording();
```
**`getRecordedAudio()`** → `Blob | null`
Get the last recorded audio as a Blob.
```javascript
const audioBlob = player.getRecordedAudio();
if (audioBlob) {
console.log("Audio size:", audioBlob.size);
}
```
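For example, the returned Blob can be offered as a file download for debugging. This is a sketch using the documented `getRecordedAudio()` and standard Blob/URL browser APIs; `downloadRecording` is a hypothetical helper, and the Blob's container format is not documented, so the `.webm` extension is an assumption:
```javascript
// Hypothetical helper: save the last recording to a file for debugging.
function downloadRecording(player, filename = "recording.webm") {
  const blob = player.getRecordedAudio();
  if (!blob) return false; // nothing recorded yet
  const url = URL.createObjectURL(blob);
  const a = document.createElement("a");
  a.href = url;
  a.download = filename;
  a.click();
  URL.revokeObjectURL(url); // free the object URL once the download starts
  return true;
}
```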
**`playRecordedAudio()`** → `Promise<void>`
Play the last recorded audio.
```javascript
try {
await player.playRecordedAudio();
} catch (error) {
console.error("No recorded audio available");
}
```
#### Volume Control
**`setVolume(volume)`** → `void`
Set the video volume (0.0 to 1.0).
```javascript
player.setVolume(0.5); // 50% volume
```
**`getVolume()`** → `number`
Get the current video volume.
```javascript
const volume = player.getVolume();
```
#### State Queries
**`isLoading()`** → `boolean`
Check if the player is currently loading.
```javascript
if (player.isLoading()) {
console.log("Player is loading...");
}
```
**`isRecording()`** → `boolean`
Check if audio recording is currently active.
```javascript
const recording = player.isRecording();
```
**`isProcessing()`** → `boolean`
Check if the recorded audio is being processed into a video response.
```javascript
const processing = player.isProcessing();
```
**`getMicMode()`** → `'toggle' | 'push-to-talk' | 'hybrid'`
Get the current microphone mode.
```javascript
const mode = player.getMicMode();
```
**`setMicMode(mode)`** → `boolean`
Change the microphone mode dynamically.
```javascript
const success = player.setMicMode("hybrid");
```
**`getDefaultMicConfig()`** → `object`
Get information about the default microphone configuration.
```javascript
const config = player.getDefaultMicConfig();
// Returns: { useDefaultMic, micPosition, micMode, isDefaultMicVisible }
```
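The state queries above are poll-based; the SDK does not expose an event API. The helper below (hypothetical, not part of the SDK) diffs successive `isRecording()`/`isProcessing()` snapshots so application code can react to state transitions instead of raw booleans:
```javascript
// Compare two state snapshots and return the transition events between them.
function diffPlayerState(prev, next) {
  const events = [];
  if (!prev.recording && next.recording) events.push("recordingstart");
  if (prev.recording && !next.recording) events.push("recordingstop");
  if (!prev.processing && next.processing) events.push("processingstart");
  if (prev.processing && !next.processing) events.push("processingend");
  return events;
}

// Poll the player and forward transition events to a callback.
// Returns a function that stops the watcher.
function watchPlayer(player, onEvent, intervalMs = 200) {
  let prev = { recording: false, processing: false };
  const id = setInterval(() => {
    const next = {
      recording: player.isRecording(),
      processing: player.isProcessing(),
    };
    for (const ev of diffPlayerState(prev, next)) onEvent(ev);
    prev = next;
  }, intervalMs);
  return () => clearInterval(id);
}
```
Usage: `const stop = watchPlayer(player, (ev) => console.log(ev));` then call `stop()` when tearing the UI down.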
## Custom Microphone Implementation
For advanced users who want to implement their own microphone UI:
```javascript
// Initialize without default mic
const player = sdk.init("#container", {
useDefaultMic: false,
});
// Create custom microphone button
const customMicButton = document.createElement("button");
customMicButton.textContent = "Ask AI";
customMicButton.style.cssText = `
position: absolute;
bottom: 20px;
left: 50%;
transform: translateX(-50%);
padding: 15px 30px;
background: linear-gradient(45deg, #007bff, #0056b3);
color: white;
border: none;
border-radius: 25px;
cursor: pointer;
font-size: 16px;
font-weight: bold;
`;
// Handle recording with custom button
customMicButton.addEventListener("mousedown", () => {
if (!player.isRecording()) {
player.startRecording();
customMicButton.textContent = "Release to Send";
customMicButton.style.background =
"linear-gradient(45deg, #dc3545, #c82333)";
}
});
customMicButton.addEventListener("mouseup", () => {
if (player.isRecording()) {
player.stopRecording();
customMicButton.textContent = "Processing...";
customMicButton.style.background =
"linear-gradient(45deg, #6c757d, #5a6268)";
}
});
// Add to container
document.getElementById("container").appendChild(customMicButton);
// Monitor processing state
setInterval(() => {
if (!player.isRecording() && !player.isProcessing()) {
customMicButton.textContent = "Ask AI";
customMicButton.style.background =
"linear-gradient(45deg, #007bff, #0056b3)";
}
}, 500);
```
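Note that the mouse-only listeners above will not fire on touch screens. Pointer Events cover mouse, touch, and pen input with a single pair of handlers; the sketch below uses a hypothetical `bindPressToTalk` helper built on the documented `startRecording()`/`stopRecording()` methods:
```javascript
// Bind push-to-talk behavior using Pointer Events (works for mouse and touch).
// Returns a function that removes the listeners again.
function bindPressToTalk(button, player) {
  const press = (e) => {
    e.preventDefault(); // suppress synthetic mouse events and text selection
    if (!player.isRecording()) player.startRecording();
  };
  const release = () => {
    if (player.isRecording()) player.stopRecording();
  };
  button.addEventListener("pointerdown", press);
  button.addEventListener("pointerup", release);
  button.addEventListener("pointercancel", release); // e.g. incoming call, scroll
  return () => {
    button.removeEventListener("pointerdown", press);
    button.removeEventListener("pointerup", release);
    button.removeEventListener("pointercancel", release);
  };
}
```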
## TypeScript Support
The SDK includes full TypeScript definitions. No additional packages required.
```typescript
import { FlamflixSDK, FlamflixOptions, PlayerAPI } from "flamflix-sdk";
// Type-safe configuration
const options: FlamflixOptions = {
useDefaultMic: true,
micMode: "hybrid", // Autocomplete and type checking
micPosition: "bottom-center",
};
// Type-safe API usage
const sdk = new FlamflixSDK("token", "project-id");
const player: PlayerAPI = sdk.init("#container", options);
// All methods are fully typed
const isRecording: boolean = player.isRecording();
const volume: number = player.getVolume();
```
## Use Cases
### Customer Support Avatar
```javascript
const supportAvatar = sdk.init("#support-container", {
micMode: "push-to-talk",
micPosition: "bottom-right",
});
```
### Educational AI Tutor
```javascript
const tutorAvatar = sdk.init("#tutor-container", {
micMode: "toggle",
micPosition: "center",
});
```
### Interactive Kiosk
```javascript
const kioskAvatar = sdk.init("#kiosk-container", {
micMode: "hybrid",
micPosition: "bottom-center",
});
```
## Browser Support
- Chrome 70+
- Firefox 65+
- Safari 12+
- Edge 79+
**Requirements:**
- MediaRecorder API support
- WebAudio API support
- Canvas 2D context support
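These requirements can be checked up front before initializing. The following is a sketch; `checkFlamflixSupport` is a hypothetical helper, not part of the SDK:
```javascript
// Return the names of missing browser APIs; an empty array means the
// environment meets the requirements listed above.
function checkFlamflixSupport() {
  const missing = [];
  if (typeof MediaRecorder === "undefined") missing.push("MediaRecorder");
  if (typeof AudioContext === "undefined" && typeof webkitAudioContext === "undefined") {
    missing.push("WebAudio");
  }
  const canvas = typeof document !== "undefined" ? document.createElement("canvas") : null;
  if (!canvas || !canvas.getContext("2d")) missing.push("Canvas 2D");
  return missing;
}
```
Call it before `sdk.init()` and fall back to a non-voice UI when anything is missing.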
## Error Handling
```javascript
try {
const sdk = new FlamflixSDK("token", "project-id");
const player = sdk.init("#container");
// Handle recording errors
player.startRecording();
} catch (error) {
if (error.message.includes("Container not found")) {
console.error("Invalid container element");
} else if (error.message.includes("Access token")) {
console.error("Invalid credentials");
} else {
console.error("Initialization failed:", error);
}
}
```
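Note that microphone-permission failures surface asynchronously from `getUserMedia`, so a synchronous `try/catch` around `startRecording()` may not see them. A preflight check using the standard Media Capture API (not part of the SDK; `ensureMicAccess` is a hypothetical helper) can fail early with a clear reason:
```javascript
// Request (and immediately release) a mic stream so permission problems
// show up before the user tries to talk to the avatar.
async function ensureMicAccess() {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    stream.getTracks().forEach((t) => t.stop()); // release the device
    return true;
  } catch (err) {
    // err.name is e.g. "NotAllowedError" (denied) or "NotFoundError" (no mic)
    return false;
  }
}
```
Usage: `if (await ensureMicAccess()) player.startRecording();`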
## Performance Considerations
- The SDK automatically manages video buffering for optimal performance
- Audio processing happens asynchronously to maintain UI responsiveness
- Canvas rendering is optimized with frame rate limiting
- Memory usage is minimized with automatic cleanup
## License
MIT License - see LICENSE file for details.
## Support
For questions, bug reports, or feature requests, please contact our support team or visit our documentation portal.