@clduab11/gemini-flow

Revolutionary AI agent swarm coordination platform with Google Services integration, multimedia processing, and production-ready monitoring. Features 8 Google AI services, quantum computing capabilities, and enterprise-grade security.

# Getting Started with Google Services Integration

## Quick Start Guide

Get up and running with Gemini-Flow's Google Services integration in under 10 minutes.

### Prerequisites

- Node.js 18.0.0 or higher
- npm 8.0.0 or higher
- Google account with access to desired services
- Basic understanding of TypeScript/JavaScript

### Installation

```bash
# Install Gemini-Flow with Google Services support
npm install @clduab11/gemini-flow

# Optional: Install specific Google service dependencies
npm install googleapis google-auth-library @google-cloud/vertexai
```

### Quick Setup

#### 1. Google AI API Key (Fastest Start - 2 minutes)

```bash
# Get your API key from https://aistudio.google.com/app/apikey
export GEMINI_API_KEY="AIzaSyD-your-api-key-here"

# Test the integration
npx gemini-flow ai generate "Hello, world!" --model gemini-1.5-flash
```

#### 2. Initialize Project

```typescript
import { GeminiFlow } from '@clduab11/gemini-flow';

const geminiFlow = new GeminiFlow({
  apiKey: process.env.GEMINI_API_KEY
});

// Your first AI generation
const result = await geminiFlow.ai.generate({
  model: 'gemini-1.5-flash',
  prompt: 'Explain quantum computing in simple terms'
});

console.log(result.content);
```

## Service-Specific Getting Started

### Google AI & Gemini Models

#### Setup

```typescript
import { GoogleAI } from '@clduab11/gemini-flow/ai';

const ai = new GoogleAI({
  apiKey: process.env.GEMINI_API_KEY
});
```

#### Basic Usage

```typescript
// Text generation
const response = await ai.generate({
  model: 'gemini-1.5-flash',
  prompt: 'Write a product description for eco-friendly water bottles',
  temperature: 0.7,
  maxTokens: 200
});

console.log(response.content);
```

#### Streaming Response

```typescript
const stream = ai.generateStream({
  model: 'gemini-1.5-pro',
  prompt: 'Tell me a story about space exploration'
});

for await (const chunk of stream) {
  process.stdout.write(chunk.content);
}
```

#### Multimodal Input

```typescript
import fs from 'fs';

const imageData = fs.readFileSync('image.jpg');

const response = await ai.generate({
  model: 'gemini-1.5-pro',
  prompt: 'Describe what you see in this image',
  media: [{
    type: 'image',
    data: imageData.toString('base64'),
    mimeType: 'image/jpeg'
  }]
});
```

#### Function Calling

```typescript
const tools = [{
  name: 'get_weather',
  description: 'Get current weather for a location',
  parameters: {
    type: 'object',
    properties: {
      location: { type: 'string' },
      unit: { type: 'string', enum: ['celsius', 'fahrenheit'] }
    },
    required: ['location']
  }
}];

const response = await ai.generate({
  model: 'gemini-1.5-pro',
  prompt: 'What\'s the weather like in Tokyo?',
  tools,
  toolChoice: 'auto'
});

// Handle tool calls
if (response.toolCalls) {
  for (const call of response.toolCalls) {
    if (call.name === 'get_weather') {
      // getWeatherData is your own weather API helper
      const weatherData = await getWeatherData(call.arguments.location);
      console.log(`Weather in ${call.arguments.location}:`, weatherData);
    }
  }
}
```

### Google Workspace Integration

#### Setup OAuth 2.0

```typescript
import { GoogleWorkspace } from '@clduab11/gemini-flow/workspace';

const workspace = new GoogleWorkspace({
  clientId: 'your-client-id.apps.googleusercontent.com',
  clientSecret: 'your-client-secret',
  redirectUri: 'http://localhost:3000/callback'
});

// Initialize OAuth flow
const authUrl = workspace.getAuthUrl();
console.log('Visit this URL to authorize:', authUrl);

// After user authorization, exchange the code for tokens
const tokens = await workspace.getTokens(authorizationCode);
await workspace.initialize(tokens);
```

#### Google Drive Operations

```typescript
// Search for files
const files = await workspace.drive.search('type:document modified > 2025-01-01');

console.log(`Found ${files.length} documents:`);
files.forEach(file => {
  console.log(`- ${file.name} (${file.lastModified})`);
});

// Create a new document
const doc = await workspace.docs.create({
  title: 'Meeting Notes - Q1 Planning',
  content: `
# Q1 Planning Meeting

## Agenda
1. Review previous quarter
2. Set Q1 objectives
3. Resource allocation

## Action Items
- [ ] Finalize budget proposal
- [ ] Schedule team meetings
`
});

console.log(`Document created: ${doc.url}`);
```

#### Google Sheets Analysis

```typescript
// Analyze spreadsheet data
const analysis = await workspace.sheets.analyze({
  spreadsheetId: '1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms',
  range: 'Sheet1!A1:Z1000',
  analysisType: 'statistical'
});

console.log('Data Analysis Results:');
console.log(`Total rows: ${analysis.rowCount}`);
console.log(`Summary: ${analysis.summary}`);
```

### Vertex AI Platform

#### Setup Service Account

```typescript
import { VertexAI } from '@clduab11/gemini-flow/vertex-ai';

const vertexAI = new VertexAI({
  projectId: 'your-gcp-project-id',
  location: 'us-central1',
  credentials: {
    // Service account JSON, or omit to use Application Default Credentials (ADC)
    type: 'service_account',
    project_id: 'your-project',
    private_key: '-----BEGIN PRIVATE KEY-----...',
    client_email: 'service-account@project.iam.gserviceaccount.com'
  }
});
```

#### Enterprise AI Operations

```typescript
// List available models
const models = await vertexAI.listModels();
console.log('Available models:', models.map(m => m.name));

// Make predictions
const prediction = await vertexAI.predict({
  model: 'text-bison@001',
  instances: [{
    prompt: 'Analyze the following customer feedback for sentiment...'
  }],
  parameters: {
    temperature: 0.2,
    maxOutputTokens: 256
  }
});

console.log('Prediction result:', prediction);
```

### Future Google Services (Beta Access)

#### Video Generation (Veo3)

```typescript
import { VideoGeneration } from '@clduab11/gemini-flow/multimedia';

const video = new VideoGeneration({
  projectId: 'your-project-id',
  credentials: serviceAccountCredentials
});

// Generate video
const task = await video.generate({
  prompt: 'A peaceful lake at sunset with mountains in the background',
  duration: 10, // seconds
  resolution: '1080p',
  style: 'cinematic'
});

// Monitor generation progress
console.log(`Generation started: ${task.taskId}`);
const result = await video.waitForCompletion(task.taskId);
console.log(`Video ready: ${result.videoUrl}`);
```

#### Image Generation (Imagen 4)

```typescript
import { ImageGeneration } from '@clduab11/gemini-flow/multimedia';

const images = new ImageGeneration({
  projectId: 'your-project-id',
  credentials: serviceAccountCredentials
});

const result = await images.generate({
  prompt: 'A modern office space with natural lighting and plants',
  style: 'photorealistic',
  aspectRatio: '16:9',
  quality: 'high',
  samples: 4
});

result.images.forEach((image, index) => {
  console.log(`Generated image ${index + 1}: ${image.url}`);
});
```

## Common Integration Patterns

### 1. Content Creation Pipeline

```typescript
class ContentCreationPipeline {
  constructor(
    private ai: GoogleAI,
    private workspace: GoogleWorkspace,
    private images: ImageGeneration
  ) {}

  async createBlogPost(topic: string): Promise<string> {
    // 1. Generate outline
    const outline = await this.ai.generate({
      model: 'gemini-1.5-pro',
      prompt: `Create a detailed outline for a blog post about: ${topic}`,
      temperature: 0.7
    });

    // 2. Generate full content
    const content = await this.ai.generate({
      model: 'gemini-1.5-pro',
      prompt: `Write a complete blog post based on this outline: ${outline.content}`,
      temperature: 0.6,
      maxTokens: 2000
    });

    // 3. Create Google Doc
    const doc = await this.workspace.docs.create({
      title: `Blog Post: ${topic}`,
      content: content.content
    });

    // 4. Generate and add images
    const imagePrompt = `Create an illustration for a blog post about ${topic}`;
    const image = await this.generateImage(imagePrompt);
    if (image) {
      await this.workspace.docs.addImage(doc.id, image.url, { position: 'top' });
    }

    return doc.url;
  }

  private async generateImage(prompt: string): Promise<{ url: string } | null> {
    try {
      // This would use Imagen 4 when available
      const result = await this.images.generate({ prompt });
      return result.images[0];
    } catch (error) {
      console.warn('Image generation failed:', error.message);
      return null;
    }
  }
}
```

### 2. Data Analysis Workflow

```typescript
class DataAnalysisWorkflow {
  constructor(
    private ai: GoogleAI,
    private workspace: GoogleWorkspace
  ) {}

  async analyzeSpreadsheet(spreadsheetId: string): Promise<string> {
    // 1. Extract data from spreadsheet
    const data = await this.workspace.sheets.getValues({
      spreadsheetId,
      range: 'A1:Z1000'
    });

    // 2. Perform AI-powered analysis (send only the first 100 rows)
    const analysis = await this.ai.generate({
      model: 'gemini-1.5-pro',
      prompt: `
        Analyze this spreadsheet data and provide insights:
        ${JSON.stringify(data.values.slice(0, 100))}

        Please provide:
        1. Summary of the data
        2. Key trends and patterns
        3. Anomalies or outliers
        4. Recommendations
      `,
      temperature: 0.2
    });

    // 3. Create analysis report
    const report = await this.workspace.docs.create({
      title: `Data Analysis Report - ${new Date().toLocaleDateString()}`,
      content: `
# Data Analysis Report

## Source
Spreadsheet ID: ${spreadsheetId}
Analysis Date: ${new Date().toLocaleString()}

## Analysis Results
${analysis.content}

## Raw Data Summary
- Total Rows: ${data.values.length}
- Total Columns: ${data.values[0]?.length || 0}
- Data Range: ${data.range}
`
    });

    return report.url;
  }
}
```

### 3. Customer Support Automation

```typescript
class CustomerSupportBot {
  constructor(
    private ai: GoogleAI,
    private workspace: GoogleWorkspace
  ) {}

  async handleSupportTicket(ticketData: SupportTicket): Promise<void> {
    // 1. Analyze customer query
    const analysis = await this.ai.generate({
      model: 'gemini-1.5-flash',
      prompt: `
        Analyze this customer support ticket:
        Subject: ${ticketData.subject}
        Message: ${ticketData.message}
        Customer: ${ticketData.customerEmail}

        Classify the issue type and urgency level.
        Suggest a response strategy.
      `,
      temperature: 0.3
    });

    // 2. Generate response draft
    const response = await this.ai.generate({
      model: 'gemini-1.5-pro',
      prompt: `
        Based on this analysis: ${analysis.content}

        Draft a professional, helpful response to the customer.
        Be empathetic and provide clear next steps.
      `,
      temperature: 0.4
    });

    // 3. Create ticket tracking document
    const ticketDoc = await this.workspace.docs.create({
      title: `Support Ticket - ${ticketData.id}`,
      content: `
# Support Ticket ${ticketData.id}

## Customer Information
- Email: ${ticketData.customerEmail}
- Date: ${new Date().toLocaleString()}

## Issue Details
**Subject:** ${ticketData.subject}
**Message:** ${ticketData.message}

## AI Analysis
${analysis.content}

## Suggested Response
${response.content}

## Status
- [x] Received
- [ ] Responded
- [ ] Resolved
- [ ] Closed
`
    });
    console.log(`Tracking document created: ${ticketDoc.url}`);

    // 4. Log ticket in tracking spreadsheet
    await this.workspace.sheets.appendRow({
      spreadsheetId: 'support-tickets-spreadsheet-id',
      range: 'Tickets!A:E',
      values: [[
        ticketData.id,
        ticketData.customerEmail,
        ticketData.subject,
        'Open',
        new Date().toISOString()
      ]]
    });
  }
}
```

## Error Handling Best Practices

### Graceful Degradation

```typescript
class RobustGoogleServiceClient {
  constructor(
    private ai: GoogleAI,
    private workspace: GoogleWorkspace
  ) {}

  async generateContentWithFallback(prompt: string): Promise<string> {
    const strategies = [
      // Primary: Gemini 1.5 Pro
      () => this.ai.generate({ model: 'gemini-1.5-pro', prompt }),
      // Fallback 1: Gemini 1.5 Flash
      () => this.ai.generate({ model: 'gemini-1.5-flash', prompt }),
      // Fallback 2: Simplified prompt
      () => this.ai.generate({
        model: 'gemini-1.5-flash',
        prompt: this.simplifyPrompt(prompt),
        maxTokens: 500
      })
    ];

    for (const strategy of strategies) {
      try {
        const result = await strategy();
        return result.content;
      } catch (error) {
        console.warn(`Strategy failed: ${error.message}`);
        continue;
      }
    }

    throw new Error('All generation strategies failed');
  }

  private simplifyPrompt(prompt: string): string {
    // Reduce complex prompts to basic versions
    return prompt.slice(0, 100) + '...';
  }
}
```

### Retry Logic

```typescript
class RetryableGoogleService {
  async executeWithRetry<T>(
    operation: () => Promise<T>,
    maxRetries = 3,
    baseDelay = 1000
  ): Promise<T> {
    let lastError: Error | undefined;

    for (let attempt = 1; attempt <= maxRetries; attempt++) {
      try {
        return await operation();
      } catch (error) {
        lastError = error;

        if (!this.isRetryable(error) || attempt === maxRetries) {
          throw error;
        }

        // Exponential backoff: 1s, 2s, 4s, ...
        const delay = baseDelay * Math.pow(2, attempt - 1);
        console.log(`Attempt ${attempt} failed, retrying in ${delay}ms`);
        await this.sleep(delay);
      }
    }

    throw lastError!;
  }

  private isRetryable(error: any): boolean {
    const retryableCodes = [
      'INTERNAL_ERROR',
      'SERVICE_UNAVAILABLE',
      'RESOURCE_EXHAUSTED',
      'DEADLINE_EXCEEDED'
    ];
    return retryableCodes.includes(error.code) || error.status >= 500;
  }

  private sleep(ms: number): Promise<void> {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}
```

## Performance Optimization

### Request Batching

```typescript
class BatchProcessor {
  private batches = new Map<string, any[]>();
  private timers = new Map<string, NodeJS.Timeout>();

  addToBatch(service: string, request: any): Promise<any> {
    return new Promise((resolve, reject) => {
      const batch = this.batches.get(service) || [];
      batch.push({ request, resolve, reject });
      this.batches.set(service, batch);
      this.scheduleExecution(service);
    });
  }

  private scheduleExecution(service: string): void {
    if (this.timers.has(service)) {
      return; // Already scheduled
    }

    const timer = setTimeout(() => {
      this.executeBatch(service);
      this.timers.delete(service);
    }, 100); // 100ms batch window

    this.timers.set(service, timer);
  }

  private async executeBatch(service: string): Promise<void> {
    const batch = this.batches.get(service) || [];
    if (batch.length === 0) return;

    this.batches.set(service, []);

    try {
      const results = await this.executeBatchRequest(service, batch);
      batch.forEach((item, index) => {
        item.resolve(results[index]);
      });
    } catch (error) {
      batch.forEach(item => {
        item.reject(error);
      });
    }
  }

  private async executeBatchRequest(service: string, batch: any[]): Promise<any[]> {
    // Service-specific batch implementation
    switch (service) {
      case 'sheets-read':
        return this.batchSheetsRead(batch);
      case 'drive-metadata':
        return this.batchDriveMetadata(batch);
      default:
        throw new Error(`Unsupported batch service: ${service}`);
    }
  }
}
```

### Caching Strategy

```typescript
import crypto from 'crypto';
import { LRUCache } from 'lru-cache';

class CachedGoogleService {
  constructor(private ai: GoogleAI) {}

  private cache = new LRUCache<string, any>({
    max: 1000,
    ttl: 1000 * 60 * 15 // 15 minutes
  });

  async getCachedResult<T>(
    key: string,
    fetcher: () => Promise<T>,
    ttl?: number
  ): Promise<T> {
    const cached = this.cache.get(key);
    if (cached) {
      return cached;
    }

    const result = await fetcher();
    this.cache.set(key, result, { ttl });
    return result;
  }

  async generateWithCache(prompt: string, model: string): Promise<any> {
    const cacheKey = `ai:${model}:${this.hashPrompt(prompt)}`;
    return this.getCachedResult(
      cacheKey,
      () => this.ai.generate({ model, prompt }),
      1000 * 60 * 60 * 24 // 24 hours for AI generations
    );
  }

  private hashPrompt(prompt: string): string {
    return crypto.createHash('md5').update(prompt).digest('hex');
  }
}
```

## Next Steps

1. **Explore Advanced Features**
   - Multi-agent orchestration
   - Complex workflow automation
   - Real-time collaboration features
2. **Production Deployment**
   - Set up monitoring and alerting
   - Implement proper error handling
   - Configure rate limiting and quotas
3. **Security Hardening**
   - Review authentication configuration
   - Implement proper access controls
   - Set up audit logging
4. **Scale Optimization**
   - Implement caching strategies
   - Set up load balancing
   - Configure auto-scaling

For more detailed examples and advanced use cases, see the [Code Examples](./code-examples/) section and [API Reference](./api-reference/).

## Support

- **Documentation**: [Complete API Documentation](https://docs.gemini-flow.dev)
- **GitHub**: [Issues and Discussions](https://github.com/clduab11/gemini-flow)
- **Community**: [Discord Server](https://discord.gg/gemini-flow)

## What's Next?

Continue to the [Code Examples](./code-examples/) section for more detailed implementation examples, or jump to [Authentication Setup](./authentication-guide.md) for production-ready authentication configuration.
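
As an addendum to the rate-limiting recommendation under Next Steps: a token bucket is a common way to keep outbound API calls under a quota. The `TokenBucket` class below is a minimal, illustrative sketch, not part of gemini-flow's API; wrap your `ai.generate` calls behind `tryAcquire` (or queue them until it returns true).

```typescript
// Minimal token-bucket rate limiter (illustrative; not part of gemini-flow).
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(private capacity: number, private refillPerSec: number) {
    this.tokens = capacity;       // start with a full bucket (burst allowance)
    this.lastRefill = Date.now();
  }

  // Returns true if a request may proceed now, false if it should be throttled.
  tryAcquire(): boolean {
    const now = Date.now();
    const elapsed = (now - this.lastRefill) / 1000;
    // Refill proportionally to elapsed time, capped at capacity
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerSec);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

const bucket = new TokenBucket(2, 1); // burst of 2 requests, refills 1 request/second
console.log([bucket.tryAcquire(), bucket.tryAcquire(), bucket.tryAcquire()]);
// first two calls pass immediately, the third is throttled
```

Pair this with the retry helper from Error Handling Best Practices: a throttled call can simply wait and retry, while the bucket keeps the aggregate request rate within your quota.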