# VoiceFeedback SDK
[npm package](https://badge.fury.io/js/%40voicefeedback%2Fsdk) • [MIT License](https://opensource.org/licenses/MIT) • [TypeScript](http://www.typescriptlang.org/)
A modern, elegant voice recording SDK for collecting user feedback with advanced AI analysis. Features beautiful UI components, real-time transcription, sentiment analysis, and topic extraction.
## Features
- **Modern UI Components** - Beautiful, customizable buttons with multiple variants
- **High-Quality Recording** - Crystal-clear audio capture with broad browser compatibility
- **AI-Powered Analysis** - Real-time transcription, sentiment analysis, and topic extraction
- **Easy Integration** - Simple API for React and vanilla JavaScript
- **Responsive Design** - Works on desktop, tablet, and mobile
- **Highly Customizable** - Multiple styles, sizes, and configuration options
- **TypeScript Support** - Full type definitions included
## Quick Start
### Installation
```bash
npm install @voicefeedback/sdk
```
### React Component (Recommended)
```jsx
import { VoiceFeedbackButton } from '@voicefeedback/sdk/react';

function App() {
  return (
    <VoiceFeedbackButton
      apiKey="your-api-key"
      variant="primary"
      size="medium"
      shape="rounded"
      onComplete={(result) => {
        console.log('Feedback received:', result);
        // Handle the feedback data
      }}
      onError={(error) => {
        console.error('Error:', error);
      }}
    />
  );
}
```
### Vanilla JavaScript
```javascript
import VoiceFeedback from '@voicefeedback/sdk';

const recorder = new VoiceFeedback({
  apiKey: 'your-api-key',
  apiUrl: 'https://quicksass.cool.newstack.be/api', // Optional: custom API server
  onComplete: (result) => {
    console.log('Transcript:', result.transcript);
    console.log('Sentiment:', result.sentiment);
    console.log('Topics:', result.topics);
  },
  onError: (error) => {
    console.error('Recording error:', error);
  }
});

// Start recording
await recorder.startRecording();

// Stop recording (optional - the user can also click the button to stop)
recorder.stopRecording();
```
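To wire the vanilla recorder to your own UI, you can toggle recording from a click handler and read `getStatus()` to decide which call to make. A minimal sketch, assuming a hypothetical `#feedback-btn` element exists on the page:

```javascript
import VoiceFeedback from '@voicefeedback/sdk';

const recorder = new VoiceFeedback({
  apiKey: 'your-api-key',
  onComplete: (result) => console.log('Feedback received:', result),
  onError: (error) => console.error('Recording error:', error)
});

// Hypothetical button element used for this sketch
const button = document.querySelector('#feedback-btn');

button.addEventListener('click', async () => {
  // getStatus() reports whether a recording is currently in progress
  if (recorder.getStatus().isRecording) {
    recorder.stopRecording();
    button.textContent = 'Share Feedback';
  } else {
    await recorder.startRecording();
    button.textContent = 'Stop Recording';
  }
});
```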
## Complete API Reference
### VoiceFeedback Class
#### Constructor Options
```typescript
interface VoiceFeedbackConfig {
  apiKey: string;        // Your VoiceFeedback API key (required)
  apiUrl?: string;       // Custom API URL (default: https://api.voicefeedback.com/v1)
  webhookUrl?: string;   // Webhook URL for real-time notifications
  language?: string;     // Language code (default: 'en')
  maxDuration?: number;  // Max recording duration in seconds (default: 300)
  debug?: boolean;       // Enable debug logging (default: false)
  onStart?: () => void;  // Called when recording starts
  onStop?: () => void;   // Called when recording stops
  onComplete?: (result: VoiceFeedbackResult) => void; // Called when processing is complete
  onError?: (error: Error) => void;                   // Called on errors
}
```
#### Methods
```typescript
// Start voice recording
voiceFeedback.startRecording(): Promise<void>

// Stop voice recording
voiceFeedback.stopRecording(): void

// Get current recording status
voiceFeedback.getStatus(): { isRecording: boolean; duration: number }

// Test API key validity
voiceFeedback.testApiKey(): Promise<{ valid: boolean; message: string }>

// Check browser compatibility (static)
VoiceFeedback.isSupported(): boolean

// Quick-start helper (static)
VoiceFeedback.quickStart(apiKey: string, options?: Partial<VoiceFeedbackConfig>): Promise<VoiceFeedback>
```
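As a usage sketch, the static `quickStart()` helper can replace manual construction when you only need an API key plus a few options; the option names below come from `VoiceFeedbackConfig`:

```javascript
import VoiceFeedback from '@voicefeedback/sdk';

// quickStart() resolves to a ready-to-use VoiceFeedback instance
const recorder = await VoiceFeedback.quickStart('vf_your_api_key', {
  language: 'en',
  maxDuration: 120,
  onComplete: (result) => console.log('Transcript:', result.transcript)
});

await recorder.startRecording();
```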
#### Response Format
```typescript
interface VoiceFeedbackResult {
  id: string;               // Unique feedback ID
  transcript: string;       // Full transcription
  sentiment: 'positive' | 'negative' | 'neutral';
  sentimentScore: number;   // Score from -1 to 1
  topics: string[];         // Extracted topics/keywords
  emotions?: string[];      // Detected emotions (if available)
  duration: number;         // Recording duration in seconds
  language: string;         // Detected/specified language
  processingTime: number;   // API processing time in ms
}
```
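For illustration, an `onComplete` handler might branch on these fields; `routeToSupport` and `logFeedback` below are hypothetical application functions:

```javascript
const vf = new VoiceFeedback({
  apiKey: 'vf_your_api_key',
  onComplete: (result) => {
    // Escalate strongly negative feedback, otherwise just log it
    if (result.sentiment === 'negative' && result.sentimentScore < -0.5) {
      routeToSupport(result.id, result.transcript);
    } else {
      logFeedback(result.id, result.topics, result.sentimentScore);
    }
  }
});
```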
### React Hooks & Components
#### useVoiceFeedback Hook
```typescript
const {
  startRecording,  // Function to start recording
  stopRecording,   // Function to stop recording
  isRecording,     // Boolean recording state
  duration,        // Current recording duration in seconds
  isSupported,     // Browser compatibility check
  error            // Any errors that occurred
} = useVoiceFeedback(options: VoiceFeedbackHookOptions);
```
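A minimal component sketch using the hook. It assumes the hook is exported from the same `/react` entry point as `VoiceFeedbackButton` and that its options accept the `apiKey` and `onComplete` fields shown elsewhere in this README:

```jsx
import { useVoiceFeedback } from '@voicefeedback/sdk/react';

function FeedbackRecorder() {
  const { startRecording, stopRecording, isRecording, duration, isSupported, error } =
    useVoiceFeedback({
      apiKey: 'vf_your_api_key',
      onComplete: (result) => console.log('Transcript:', result.transcript)
    });

  // Fall back gracefully when the browser cannot record audio
  if (!isSupported) {
    return <p>Voice recording is not supported in this browser.</p>;
  }

  return (
    <>
      <button onClick={isRecording ? stopRecording : startRecording}>
        {isRecording ? `Stop (${duration}s)` : 'Share Feedback'}
      </button>
      {error && <p role="alert">{String(error)}</p>}
    </>
  );
}
```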
#### VoiceFeedbackButton Component
```jsx
<VoiceFeedbackButton
  apiKey="vf_your_api_key"
  onComplete={(result) => console.log(result)}
  onError={(error) => console.error(error)}

  // Customization options
  recordingText="Recording..."
  idleText="Share Feedback"
  className="custom-class"
  style={{ padding: '12px 24px' }}
  disabled={false}

  // All VoiceFeedbackConfig options are also supported
  webhookUrl="https://your-app.com/webhook"
  language="en"
  maxDuration={180}
  debug={true}
/>
```
## Language Support
The SDK supports automatic language detection and transcription in 50+ languages including:
| Language | Code | Language | Code | Language | Code |
|----------|------|----------|------|----------|------|
| English | `en` | Spanish | `es` | French | `fr` |
| German | `de` | Italian | `it` | Portuguese | `pt` |
| Dutch | `nl` | Russian | `ru` | Chinese | `zh` |
| Japanese | `ja` | Korean | `ko` | Arabic | `ar` |
```javascript
const vf = new VoiceFeedback({
  apiKey: 'vf_your_api_key',
  language: 'es', // Spanish
  onComplete: (result) => {
    console.log('Transcripción:', result.transcript);
  }
});
```
## Advanced Configuration
### Custom API URL
For self-hosted or custom deployments:
```javascript
const vf = new VoiceFeedback({
  apiKey: 'vf_your_api_key',
  apiUrl: 'https://your-custom-api.com/v1',
  debug: true
});
```
### Webhook Integration
Receive real-time notifications when feedback is processed:
```javascript
// Frontend
const vf = new VoiceFeedback({
  apiKey: 'vf_your_api_key',
  webhookUrl: 'https://your-app.com/api/voice-feedback',
  onComplete: (result) => {
    // Handle in frontend
    updateUI(result);
  }
});

// Backend (Express.js example)
app.post('/api/voice-feedback', express.json(), async (req, res) => {
  const { transcript, sentiment, topics } = req.body;

  // Process the feedback
  await db.feedback.create({
    transcript,
    sentiment,
    topics,
    userId: req.user.id,
    createdAt: new Date()
  });

  res.json({ success: true });
});
```
### Error Handling
```javascript
const vf = new VoiceFeedback({
  apiKey: 'vf_your_api_key',
  onError: (error) => {
    switch (error.message) {
      case 'Microphone access denied':
        showMicrophonePermissionDialog();
        break;
      case 'API request failed: 401':
        showApiKeyErrorDialog();
        break;
      default:
        showGenericErrorDialog(error.message);
    }
  }
});
```
## Testing & Development
### Integration Test
Test your API key and configuration:
```bash
cd node_modules/@voicefeedback/sdk
npm run test-integration -- vf_your_api_key_here --debug
```
### Browser Compatibility Check
```javascript
if (!VoiceFeedback.isSupported()) {
  console.log('Browser does not support voice recording');
  // Show alternative feedback method
}
```
### Debug Mode
Enable debug logging during development:
```javascript
const vf = new VoiceFeedback({
  apiKey: 'vf_your_api_key',
  debug: true // Shows detailed logs in console
});
```
## Browser Support
- ✅ Chrome 60+
- ✅ Firefox 55+
- ✅ Safari 14+
- ✅ Edge 79+
- ✅ Mobile Chrome/Safari
- ❌ Internet Explorer (not supported)
## Security & Privacy
- All audio data is transmitted securely over HTTPS
- Audio is processed on secure servers and not stored permanently
- API keys should be kept secure and not exposed in client-side code
- Consider using environment variables for API keys in production
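As a sketch of that last point, you can keep the key out of source control by injecting it at build time; the exact variable name and mechanism depend on your bundler (the Vite-style `import.meta.env` name below is only an example):

```javascript
import VoiceFeedback from '@voicefeedback/sdk';

// .env (not committed): VITE_VOICEFEEDBACK_API_KEY=vf_your_api_key
const vf = new VoiceFeedback({
  apiKey: import.meta.env.VITE_VOICEFEEDBACK_API_KEY
});
```

Note that build-time variables are still embedded in the shipped bundle, so treat this as configuration hygiene rather than a way to hide the key.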
## Common Integration Patterns
### 1. Customer Support Widget
```javascript
// Floating feedback button
const vf = new VoiceFeedback({
  apiKey: 'vf_your_api_key',
  onComplete: (result) => {
    // Create support ticket
    createSupportTicket({
      transcript: result.transcript,
      sentiment: result.sentiment,
      priority: result.sentiment === 'negative' ? 'high' : 'normal'
    });
  }
});

// Add floating button to page
const floatingButton = document.createElement('button');
floatingButton.innerHTML = '🎤 Feedback';
floatingButton.style.position = 'fixed';
floatingButton.style.bottom = '20px';
floatingButton.style.right = '20px';
floatingButton.onclick = () => vf.startRecording();
document.body.appendChild(floatingButton);
```
### 2. Product Feedback Collection
```javascript
// Post-purchase feedback
const vf = new VoiceFeedback({
  apiKey: 'vf_your_api_key',
  onComplete: (result) => {
    // Analyze product feedback
    analyzeProductFeedback({
      productId: currentProduct.id,
      transcript: result.transcript,
      sentiment: result.sentiment,
      topics: result.topics
    });
  }
});
```
### 3. User Research & Testing
```javascript
// UX research session
const vf = new VoiceFeedback({
  apiKey: 'vf_your_api_key',
  maxDuration: 600, // 10 minutes for longer sessions
  onComplete: (result) => {
    // Save research data
    saveResearchSession({
      sessionId: currentSession.id,
      transcript: result.transcript,
      emotions: result.emotions,
      topics: result.topics
    });
  }
});
```
## Troubleshooting
### Common Issues
**Recording doesn't start:**
- Check microphone permissions in browser
- Verify HTTPS is being used (required for microphone access)
- Test your API key with the `testApiKey()` method (see the snippet below)
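
A quick sketch of that check, using the documented `testApiKey()` return shape:

```javascript
const vf = new VoiceFeedback({ apiKey: 'vf_your_api_key' });

const { valid, message } = await vf.testApiKey();
if (!valid) {
  console.error('API key problem:', message);
}
```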
**Poor transcription quality:**
- Ensure quiet environment
- Check microphone quality
- Verify correct language is set
**API errors:**
- Verify API key is correct and active
- Check network connectivity
- Review API rate limits
### Debug Information
Enable debug mode to see detailed logging:
```javascript
const vf = new VoiceFeedback({
  apiKey: 'vf_your_api_key',
  debug: true
});
```
## Support & Community
- [Documentation](https://voicefeedback.com/docs)
- [Discord Community](https://discord.gg/voicefeedback)
- [GitHub Issues](https://github.com/voicefeedback/sdk/issues)
- [Email Support](mailto:support@voicefeedback.com)
## License
MIT License - see [LICENSE](LICENSE) file for details.
## What's Next?
- Real-time streaming transcription
- Custom voice commands
- Speaker identification
- Advanced emotion detection
- Multi-language detection in single recording
---
**Made with ❤️ by the VoiceFeedback team**
[Get Started](https://voicefeedback.com) • [API Docs](https://voicefeedback.com/docs) • [Examples](https://github.com/voicefeedback/examples)